[llvm] [RISCV] Clear vill for whole vector register moves in vsetvli insertion (PR #118283)
Luke Lau via llvm-commits
llvm-commits at lists.llvm.org
Thu Dec 5 08:47:36 PST 2024
https://github.com/lukel97 updated https://github.com/llvm/llvm-project/pull/118283
>From 501815aecdbb8db1674948baf12c7a88b71b0243 Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Mon, 2 Dec 2024 18:51:41 +0800
Subject: [PATCH 1/9] [RISCV] Clear vill for whole vector register moves in
vsetvli insertion
This is an alternative to #117866 that works by demanding a valid vtype instead of using a separate pass.
The main advantage of this is that it allows coalesceVSETVLIs to just reuse an existing vsetvli later in the block.
To do this, we first transfer the vsetvli info to some arbitrary but valid state in transferBefore when we encounter a vector copy. Then we add a new vill demanded field that will accept any other known vtype, which allows us to coalesce these vsetvlis where possible.
Note that we also need to check for vector copies in computeVLVTYPEChanges, otherwise the pass will completely skip functions that contain only vector copies and nothing else.
This is one part of a fix for #114518. We still need to check whether there are other cases where vector copies/whole register moves are inserted after vsetvli insertion.
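For example, in the updated inline-asm-v-constraint.ll test below, the whole-register moves at the entry of constraint_vm now get a vsetivli inserted beforehand so that vill is known to be clear (excerpt from this patch's test diff):

  ; RV32I: # %bb.0:
  ; RV32I-NEXT: vsetivli zero, 0, e8, m1, ta, ma
  ; RV32I-NEXT: vmv1r.v v9, v0
  ; RV32I-NEXT: vmv1r.v v0, v8

Since nothing beyond vill is demanded, coalesceVSETVLIs is free to fold this inserted vsetvli into a later one in the block when it exists.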
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 61 +-
.../CodeGen/RISCV/inline-asm-v-constraint.ll | 2 +
llvm/test/CodeGen/RISCV/rvv/abs-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll | 4 +-
.../CodeGen/RISCV/rvv/calling-conv-fastcc.ll | 4 +
llvm/test/CodeGen/RISCV/rvv/calling-conv.ll | 4 +
llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/compressstore.ll | 4 +-
.../RISCV/rvv/constant-folding-crash.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/ctlz-vp.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll | 10 +-
llvm/test/CodeGen/RISCV/rvv/expandload.ll | 551 ++++-
.../CodeGen/RISCV/rvv/extract-subvector.ll | 19 +
.../rvv/fixed-vector-i8-index-cornercase.ll | 4 +-
.../RISCV/rvv/fixed-vectors-bitreverse-vp.ll | 2 +
.../rvv/fixed-vectors-calling-conv-fastcc.ll | 1 +
.../RISCV/rvv/fixed-vectors-calling-conv.ll | 1 +
.../RISCV/rvv/fixed-vectors-ceil-vp.ll | 13 +-
.../RISCV/rvv/fixed-vectors-ctpop-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-floor-vp.ll | 13 +-
.../RISCV/rvv/fixed-vectors-fmaximum-vp.ll | 22 +-
.../RISCV/rvv/fixed-vectors-fminimum-vp.ll | 22 +-
.../RISCV/rvv/fixed-vectors-fptrunc-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll | 2 +-
.../rvv/fixed-vectors-insert-subvector.ll | 2 +
.../RISCV/rvv/fixed-vectors-masked-gather.ll | 2057 ++++++++---------
.../rvv/fixed-vectors-masked-load-int.ll | 1 +
.../RISCV/rvv/fixed-vectors-nearbyint-vp.ll | 9 +-
.../rvv/fixed-vectors-reduction-mask-vp.ll | 30 +
.../RISCV/rvv/fixed-vectors-rint-vp.ll | 9 +-
.../RISCV/rvv/fixed-vectors-round-vp.ll | 13 +-
.../RISCV/rvv/fixed-vectors-roundeven-vp.ll | 13 +-
.../RISCV/rvv/fixed-vectors-roundtozero-vp.ll | 13 +-
.../RISCV/rvv/fixed-vectors-setcc-int-vp.ll | 3 +
.../RISCV/rvv/fixed-vectors-shuffle-concat.ll | 17 +-
.../rvv/fixed-vectors-shuffle-exact-vlen.ll | 2 +
.../rvv/fixed-vectors-shuffle-reverse.ll | 22 +-
.../rvv/fixed-vectors-shuffle-vslide1up.ll | 2 +-
.../fixed-vectors-strided-load-store-asm.ll | 1 +
.../RISCV/rvv/fixed-vectors-strided-vpload.ll | 3 +
.../RISCV/rvv/fixed-vectors-trunc-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-unaligned.ll | 68 +-
.../RISCV/rvv/fixed-vectors-vadd-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vmax-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vmaxu-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vmin-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vminu-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vpgather.ll | 2 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vpload.ll | 1 +
.../RISCV/rvv/fixed-vectors-vpmerge.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vsadd-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vsaddu-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vselect-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vssub-vp.ll | 1 +
.../RISCV/rvv/fixed-vectors-vssubu-vp.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/floor-vp.ll | 28 +-
.../test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll | 36 +-
.../test/CodeGen/RISCV/rvv/fminimum-sdnode.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll | 36 +-
.../RISCV/rvv/fold-scalar-load-crash.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll | 7 +-
llvm/test/CodeGen/RISCV/rvv/inline-asm.ll | 7 +
.../CodeGen/RISCV/rvv/insert-subvector.ll | 22 +
llvm/test/CodeGen/RISCV/rvv/llrint-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/lrint-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/masked-tama.ll | 3 +
llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll | 18 +-
.../test/CodeGen/RISCV/rvv/mscatter-sdnode.ll | 2 +-
.../RISCV/rvv/named-vector-shuffle-reverse.ll | 26 +-
llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/pr88576.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/rint-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/round-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll | 28 +-
.../RISCV/rvv/rv32-spill-vector-csr.ll | 1 +
.../CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll | 5 +
.../RISCV/rvv/rv64-spill-vector-csr.ll | 1 +
.../CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll | 5 +
.../test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll | 1 +
.../RISCV/rvv/rvv-peephole-vmerge-vops.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll | 12 +-
.../CodeGen/RISCV/rvv/sink-splat-operands.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll | 4 +
.../test/CodeGen/RISCV/rvv/strided-vpstore.ll | 1 +
.../RISCV/rvv/undef-earlyclobber-chain.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vadd-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vcpop.ll | 7 +
.../RISCV/rvv/vector-deinterleave-fixed.ll | 2 +-
.../CodeGen/RISCV/rvv/vector-deinterleave.ll | 12 +-
.../RISCV/rvv/vector-interleave-fixed.ll | 8 +-
.../RISCV/rvv/vector-interleave-store.ll | 2 +-
.../CodeGen/RISCV/rvv/vector-interleave.ll | 30 +-
.../RISCV/rvv/vector-reassociations.ll | 4 +
llvm/test/CodeGen/RISCV/rvv/vector-splice.ll | 24 +-
llvm/test/CodeGen/RISCV/rvv/vfabs-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vfirst.ll | 7 +
llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll | 25 +-
.../RISCV/rvv/vfmadd-constrained-sdnode.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vfneg-vp.ll | 2 +-
.../RISCV/rvv/vfnmadd-constrained-sdnode.ll | 2 +-
.../RISCV/rvv/vfnmsub-constrained-sdnode.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfpext-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vfsqrt-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vl-opt.ll | 3 +-
.../CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll | 165 ++
.../CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll | 165 ++
llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vmfeq.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmfge.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmfgt.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmfle.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmflt.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmfne.ll | 24 +
llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vmsbf.ll | 7 +
llvm/test/CodeGen/RISCV/rvv/vmseq.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsge.ll | 55 +
llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsgt.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsif.ll | 7 +
llvm/test/CodeGen/RISCV/rvv/vmsle.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsleu.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmslt.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsltu.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsne.ll | 54 +
llvm/test/CodeGen/RISCV/rvv/vmsof.ll | 7 +
.../CodeGen/RISCV/rvv/vmv.v.v-peephole.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll | 4 +
llvm/test/CodeGen/RISCV/rvv/vp-select.ll | 1 +
.../RISCV/rvv/vp-splice-mask-fixed-vectors.ll | 12 +
.../RISCV/rvv/vp-splice-mask-vectors.ll | 21 +
.../test/CodeGen/RISCV/rvv/vpgather-sdnode.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/vpload.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vpstore.ll | 1 +
.../CodeGen/RISCV/rvv/vreductions-mask-vp.ll | 37 +
.../RISCV/rvv/vrgatherei16-subreg-liveness.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vsadd-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vsaddu-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vselect-int.ll | 1 +
llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll | 11 +-
.../CodeGen/RISCV/rvv/vsetvli-insert-O0.ll | 10 +-
llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vsext-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vzext-vp.ll | 2 +-
173 files changed, 3406 insertions(+), 1537 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 870e393b40411f..244a2d702bbbb4 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -195,6 +195,27 @@ static bool hasUndefinedPassthru(const MachineInstr &MI) {
return UseMO.getReg() == RISCV::NoRegister || UseMO.isUndef();
}
+/// Return true if \p MI is a copy that will be lowered to one or more vmvNr.vs.
+static bool isVecCopy(const MachineInstr &MI) {
+ static const TargetRegisterClass *RVVRegClasses[] = {
+ &RISCV::VRRegClass, &RISCV::VRM2RegClass, &RISCV::VRM4RegClass,
+ &RISCV::VRM8RegClass, &RISCV::VRN2M1RegClass, &RISCV::VRN2M2RegClass,
+ &RISCV::VRN2M4RegClass, &RISCV::VRN3M1RegClass, &RISCV::VRN3M2RegClass,
+ &RISCV::VRN4M1RegClass, &RISCV::VRN4M2RegClass, &RISCV::VRN5M1RegClass,
+ &RISCV::VRN6M1RegClass, &RISCV::VRN7M1RegClass, &RISCV::VRN8M1RegClass};
+ if (!MI.isCopy())
+ return false;
+
+ Register DstReg = MI.getOperand(0).getReg();
+ Register SrcReg = MI.getOperand(1).getReg();
+ for (const auto &RegClass : RVVRegClasses) {
+ if (RegClass->contains(DstReg, SrcReg)) {
+ return true;
+ }
+ }
+ return false;
+}
+
/// Which subfields of VL or VTYPE have values we need to preserve?
struct DemandedFields {
// Some unknown property of VL is used. If demanded, must preserve entire
@@ -221,10 +242,13 @@ struct DemandedFields {
bool SEWLMULRatio = false;
bool TailPolicy = false;
bool MaskPolicy = false;
+ // If this is true, we demand that VTYPE is set to some legal state, i.e. that
+ // vill is unset.
+ bool VILL = false;
// Return true if any part of VTYPE was used
bool usedVTYPE() const {
- return SEW || LMUL || SEWLMULRatio || TailPolicy || MaskPolicy;
+ return SEW || LMUL || SEWLMULRatio || TailPolicy || MaskPolicy || VILL;
}
// Return true if any property of VL was used
@@ -239,6 +263,7 @@ struct DemandedFields {
SEWLMULRatio = true;
TailPolicy = true;
MaskPolicy = true;
+ VILL = true;
}
// Mark all VL properties as demanded
@@ -263,6 +288,7 @@ struct DemandedFields {
SEWLMULRatio |= B.SEWLMULRatio;
TailPolicy |= B.TailPolicy;
MaskPolicy |= B.MaskPolicy;
+ VILL |= B.VILL;
}
#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
@@ -308,7 +334,8 @@ struct DemandedFields {
OS << ", ";
OS << "SEWLMULRatio=" << SEWLMULRatio << ", ";
OS << "TailPolicy=" << TailPolicy << ", ";
- OS << "MaskPolicy=" << MaskPolicy;
+ OS << "MaskPolicy=" << MaskPolicy << ", ";
+ OS << "VILL=" << VILL;
OS << "}";
}
#endif
@@ -503,6 +530,16 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
}
}
+ // In §32.16.6, whole vector register moves have a dependency on SEW. At the
+ // MIR level though we don't encode the element type, and it gives the same
+ // result whatever the SEW may be.
+ //
+ // However it does need valid SEW, i.e. vill must be cleared. The entry to a
+ // function, calls and inline assembly may all set it, so make sure we clear
+ // it for whole register copies.
+ if (isVecCopy(MI))
+ Res.VILL = true;
+
return Res;
}
@@ -1208,6 +1245,17 @@ static VSETVLIInfo adjustIncoming(VSETVLIInfo PrevInfo, VSETVLIInfo NewInfo,
// legal for MI, but may not be the state requested by MI.
void RISCVInsertVSETVLI::transferBefore(VSETVLIInfo &Info,
const MachineInstr &MI) const {
+ if (isVecCopy(MI) &&
+ (Info.isUnknown() || !Info.isValid() || Info.hasSEWLMULRatioOnly())) {
+ // Use an arbitrary but valid AVL and VTYPE so vill will be cleared. It may
+ // be coalesced into another vsetvli since we won't demand any fields.
+ VSETVLIInfo NewInfo; // Need a new VSETVLIInfo to clear SEWLMULRatioOnly
+ NewInfo.setAVLImm(0);
+ NewInfo.setVTYPE(RISCVII::VLMUL::LMUL_1, 8, true, true);
+ Info = NewInfo;
+ return;
+ }
+
if (!RISCVII::hasSEWOp(MI.getDesc().TSFlags))
return;
@@ -1296,7 +1344,8 @@ bool RISCVInsertVSETVLI::computeVLVTYPEChanges(const MachineBasicBlock &MBB,
for (const MachineInstr &MI : MBB) {
transferBefore(Info, MI);
- if (isVectorConfigInstr(MI) || RISCVII::hasSEWOp(MI.getDesc().TSFlags))
+ if (isVectorConfigInstr(MI) || RISCVII::hasSEWOp(MI.getDesc().TSFlags) ||
+ isVecCopy(MI))
HadVectorOp = true;
transferAfter(Info, MI);
@@ -1426,6 +1475,12 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
PrefixTransparent = false;
}
+ if (isVecCopy(MI) &&
+ !PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
+ insertVSETVLI(MBB, MI, MI.getDebugLoc(), CurInfo, PrevInfo);
+ PrefixTransparent = false;
+ }
+
uint64_t TSFlags = MI.getDesc().TSFlags;
if (RISCVII::hasSEWOp(TSFlags)) {
if (!PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
diff --git a/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll b/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
index c04e4fea7b2c29..45bc1222c0677c 100644
--- a/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
+++ b/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
@@ -45,6 +45,7 @@ define <vscale x 1 x i8> @constraint_vd(<vscale x 1 x i8> %0, <vscale x 1 x i8>
define <vscale x 1 x i1> @constraint_vm(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1) nounwind {
; RV32I-LABEL: constraint_vm:
; RV32I: # %bb.0:
+; RV32I-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32I-NEXT: vmv1r.v v9, v0
; RV32I-NEXT: vmv1r.v v0, v8
; RV32I-NEXT: #APP
@@ -54,6 +55,7 @@ define <vscale x 1 x i1> @constraint_vm(<vscale x 1 x i1> %0, <vscale x 1 x i1>
;
; RV64I-LABEL: constraint_vm:
; RV64I: # %bb.0:
+; RV64I-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64I-NEXT: vmv1r.v v9, v0
; RV64I-NEXT: vmv1r.v v0, v8
; RV64I-NEXT: #APP
diff --git a/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll b/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
index 163d9145bc3623..ee0016ec080e24 100644
--- a/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
@@ -567,6 +567,7 @@ define <vscale x 16 x i64> @vp_abs_nxv16i64(<vscale x 16 x i64> %va, <vscale x 1
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -576,7 +577,6 @@ define <vscale x 16 x i64> @vp_abs_nxv16i64(<vscale x 16 x i64> %va, <vscale x 1
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
index 66a1178cddb66c..10d24927d9b783 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
@@ -3075,6 +3075,7 @@ define <vscale x 64 x i16> @vp_bitreverse_nxv64i16(<vscale x 64 x i16> %va, <vsc
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -3086,7 +3087,6 @@ define <vscale x 64 x i16> @vp_bitreverse_nxv64i16(<vscale x 64 x i16> %va, <vsc
; CHECK-NEXT: lui a2, 3
; CHECK-NEXT: srli a4, a3, 1
; CHECK-NEXT: slli a3, a3, 2
-; CHECK-NEXT: vsetvli a5, zero, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a4
; CHECK-NEXT: sub a4, a0, a3
; CHECK-NEXT: sltu a5, a0, a4
@@ -3158,11 +3158,11 @@ define <vscale x 64 x i16> @vp_bitreverse_nxv64i16(<vscale x 64 x i16> %va, <vsc
;
; CHECK-ZVBB-LABEL: vp_bitreverse_nxv64i16:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 1
; CHECK-ZVBB-NEXT: slli a1, a1, 2
-; CHECK-ZVBB-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sub a2, a0, a1
; CHECK-ZVBB-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
index 1c95ec8fafd4f1..0dc1d0c32ac449 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
@@ -1584,6 +1584,7 @@ define <vscale x 64 x i16> @vp_bswap_nxv64i16(<vscale x 64 x i16> %va, <vscale x
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -1593,7 +1594,6 @@ define <vscale x 64 x i16> @vp_bswap_nxv64i16(<vscale x 64 x i16> %va, <vscale x
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 1
; CHECK-NEXT: slli a1, a1, 2
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -1631,11 +1631,11 @@ define <vscale x 64 x i16> @vp_bswap_nxv64i16(<vscale x 64 x i16> %va, <vscale x
;
; CHECK-ZVKB-LABEL: vp_bswap_nxv64i16:
; CHECK-ZVKB: # %bb.0:
+; CHECK-ZVKB-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-ZVKB-NEXT: vmv1r.v v24, v0
; CHECK-ZVKB-NEXT: csrr a1, vlenb
; CHECK-ZVKB-NEXT: srli a2, a1, 1
; CHECK-ZVKB-NEXT: slli a1, a1, 2
-; CHECK-ZVKB-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-ZVKB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVKB-NEXT: sub a2, a0, a1
; CHECK-ZVKB-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
index a4e5ab661c5285..d824426727fd9a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
@@ -336,6 +336,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV32-NEXT: add a1, a3, a1
; RV32-NEXT: li a3, 2
; RV32-NEXT: vs8r.v v16, (a1)
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v8, v0
; RV32-NEXT: vmv8r.v v16, v24
; RV32-NEXT: call ext2
@@ -374,6 +375,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV64-NEXT: add a1, a3, a1
; RV64-NEXT: li a3, 2
; RV64-NEXT: vs8r.v v16, (a1)
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv8r.v v8, v0
; RV64-NEXT: vmv8r.v v16, v24
; RV64-NEXT: call ext2
@@ -451,6 +453,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 128
; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v16, v0
; RV32-NEXT: call ext3
; RV32-NEXT: addi sp, s0, -144
@@ -523,6 +526,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 128
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv8r.v v16, v0
; RV64-NEXT: call ext3
; RV64-NEXT: addi sp, s0, -144
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
index 9b27116fef7cae..8d56a76dc8eb80 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
@@ -103,6 +103,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @caller_tuple_return(
; RV32-NEXT: sw ra, 12(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
; RV32-NEXT: call callee_tuple_return
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v6, v8
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: vmv2r.v v10, v6
@@ -119,6 +120,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @caller_tuple_return(
; RV64-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64-NEXT: .cfi_offset ra, -8
; RV64-NEXT: call callee_tuple_return
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v6, v8
; RV64-NEXT: vmv2r.v v8, v10
; RV64-NEXT: vmv2r.v v10, v6
@@ -144,6 +146,7 @@ define void @caller_tuple_argument(target("riscv.vector.tuple", <vscale x 16 x i
; RV32-NEXT: .cfi_def_cfa_offset 16
; RV32-NEXT: sw ra, 12(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v6, v8
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: vmv2r.v v10, v6
@@ -160,6 +163,7 @@ define void @caller_tuple_argument(target("riscv.vector.tuple", <vscale x 16 x i
; RV64-NEXT: .cfi_def_cfa_offset 16
; RV64-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64-NEXT: .cfi_offset ra, -8
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v6, v8
; RV64-NEXT: vmv2r.v v8, v10
; RV64-NEXT: vmv2r.v v10, v6
diff --git a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
index 7d0b0118a72725..5bad25aab7aa29 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.ceil.nxv4bf16(<vscale x 4 x bfloat>, <vsc
define <vscale x 4 x bfloat> @vp_ceil_vv_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.ceil.nxv8bf16(<vscale x 8 x bfloat>, <vsc
define <vscale x 8 x bfloat> @vp_ceil_vv_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.ceil.nxv16bf16(<vscale x 16 x bfloat>, <
define <vscale x 16 x bfloat> @vp_ceil_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -279,9 +279,9 @@ define <vscale x 32 x bfloat> @vp_ceil_vv_nxv32bf16(<vscale x 32 x bfloat> %va,
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -582,8 +582,8 @@ define <vscale x 4 x half> @vp_ceil_vv_nxv4f16(<vscale x 4 x half> %va, <vscale
;
; ZVFHMIN-LABEL: vp_ceil_vv_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -649,6 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.ceil.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_ceil_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -668,8 +669,8 @@ define <vscale x 8 x half> @vp_ceil_vv_nxv8f16(<vscale x 8 x half> %va, <vscale
;
; ZVFHMIN-LABEL: vp_ceil_vv_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -735,6 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.ceil.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_ceil_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,8 +756,8 @@ define <vscale x 16 x half> @vp_ceil_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vp_ceil_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -821,6 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.ceil.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -846,9 +849,9 @@ define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1068,6 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.ceil.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_ceil_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1112,6 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.ceil.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_ceil_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1156,6 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.ceil.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_ceil_vv_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1242,6 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.ceil.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_ceil_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1286,6 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.ceil.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_ceil_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1330,6 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.ceil.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_ceil_vv_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1374,6 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.ceil.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_ceil_vv_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1425,13 +1435,13 @@ define <vscale x 16 x double> @vp_ceil_vv_nxv16f64(<vscale x 16 x double> %va, <
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/compressstore.ll b/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
index a407cd048ffe3f..08bf82d4bc796a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
@@ -197,9 +197,9 @@ entry:
define void @test_compresstore_v256i8(ptr %p, <256 x i1> %mask, <256 x i8> %data) {
; RV64-LABEL: test_compresstore_v256i8:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV64-NEXT: vmv1r.v v7, v8
; RV64-NEXT: li a2, 128
-; RV64-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV64-NEXT: vslidedown.vi v9, v0, 1
; RV64-NEXT: vmv.x.s a3, v0
; RV64-NEXT: vsetvli zero, a2, e8, m8, ta, ma
@@ -230,9 +230,9 @@ define void @test_compresstore_v256i8(ptr %p, <256 x i1> %mask, <256 x i8> %data
; RV32-NEXT: slli a2, a2, 3
; RV32-NEXT: sub sp, sp, a2
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v16
; RV32-NEXT: li a2, 128
-; RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV32-NEXT: vslidedown.vi v9, v0, 1
; RV32-NEXT: li a3, 32
; RV32-NEXT: vmv.x.s a4, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll b/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
index ad176df71397e6..f6c26bbba89fe5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
@@ -18,11 +18,11 @@
define void @constant_folding_crash(ptr %v54, <4 x ptr> %lanes.a, <4 x ptr> %lanes.b, <4 x i1> %sel) {
; RV32-LABEL: constant_folding_crash:
; RV32: # %bb.0: # %entry
+; RV32-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v10, v0
; RV32-NEXT: lw a0, 8(a0)
; RV32-NEXT: andi a0, a0, 1
; RV32-NEXT: seqz a0, a0
-; RV32-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; RV32-NEXT: vmv.v.x v11, a0
; RV32-NEXT: vmsne.vi v0, v11, 0
; RV32-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -43,11 +43,11 @@ define void @constant_folding_crash(ptr %v54, <4 x ptr> %lanes.a, <4 x ptr> %lan
;
; RV64-LABEL: constant_folding_crash:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: ld a0, 8(a0)
; RV64-NEXT: andi a0, a0, 1
; RV64-NEXT: seqz a0, a0
-; RV64-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; RV64-NEXT: vmv.v.x v13, a0
; RV64-NEXT: vmsne.vi v0, v13, 0
; RV64-NEXT: vsetvli zero, zero, e64, m2, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctlz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ctlz-vp.ll
index f56a792fdef6a8..ce4bc48dff0426 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctlz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctlz-vp.ll
@@ -1235,13 +1235,13 @@ declare <vscale x 16 x i64> @llvm.vp.ctlz.nxv16i64(<vscale x 16 x i64>, i1 immar
define <vscale x 16 x i64> @vp_ctlz_nxv16i64(<vscale x 16 x i64> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ctlz_nxv16i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: fsrmi a4, 1
; CHECK-NEXT: li a2, 52
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: sub a5, a0, a1
-; CHECK-NEXT: vsetvli a6, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sltu a3, a0, a5
; CHECK-NEXT: addi a3, a3, -1
@@ -1270,11 +1270,11 @@ define <vscale x 16 x i64> @vp_ctlz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
;
; CHECK-ZVBB-LABEL: vp_ctlz_nxv16i64:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 3
; CHECK-ZVBB-NEXT: sub a3, a0, a1
-; CHECK-ZVBB-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sltu a2, a0, a3
; CHECK-ZVBB-NEXT: addi a2, a2, -1
@@ -2465,12 +2465,12 @@ define <vscale x 8 x i64> @vp_ctlz_zero_undef_nxv8i64_unmasked(<vscale x 8 x i64
define <vscale x 16 x i64> @vp_ctlz_zero_undef_nxv16i64(<vscale x 16 x i64> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ctlz_zero_undef_nxv16i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: fsrmi a3, 1
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a4, a0, a1
-; CHECK-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a4
; CHECK-NEXT: addi a2, a2, -1
@@ -2497,11 +2497,11 @@ define <vscale x 16 x i64> @vp_ctlz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
;
; CHECK-ZVBB-LABEL: vp_ctlz_zero_undef_nxv16i64:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 3
; CHECK-ZVBB-NEXT: sub a3, a0, a1
-; CHECK-ZVBB-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sltu a2, a0, a3
; CHECK-ZVBB-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
index 9e75dc9dccffde..52ddd9ab2f8329 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
@@ -2022,6 +2022,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x30, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 48 * vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v7, v0
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: li a2, 24
@@ -2032,7 +2033,6 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: lui a2, 349525
; RV32-NEXT: srli a3, a1, 3
-; RV32-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; RV32-NEXT: vslidedown.vx v0, v0, a3
; RV32-NEXT: sub a3, a0, a1
; RV32-NEXT: addi a2, a2, 1365
@@ -2294,11 +2294,11 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
;
; CHECK-ZVBB-LABEL: vp_ctpop_nxv16i64:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 3
; CHECK-ZVBB-NEXT: sub a3, a0, a1
-; CHECK-ZVBB-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sltu a2, a0, a3
; CHECK-ZVBB-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
index 9e6295b6644171..766717d92a7493 100644
--- a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
@@ -2246,6 +2246,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x38, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 56 * vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v24, v0
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: slli a1, a1, 5
@@ -2256,7 +2257,6 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: li a2, 1
; RV32-NEXT: srli a3, a1, 3
; RV32-NEXT: sub a4, a0, a1
-; RV32-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; RV32-NEXT: vslidedown.vx v0, v0, a3
; RV32-NEXT: sltu a3, a0, a4
; RV32-NEXT: addi a3, a3, -1
@@ -2499,6 +2499,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV64-NEXT: vmv1r.v v24, v0
; RV64-NEXT: csrr a1, vlenb
; RV64-NEXT: slli a1, a1, 3
@@ -2517,7 +2518,6 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: addiw a4, a4, 819
; RV64-NEXT: addiw a5, a5, -241
; RV64-NEXT: addiw t1, a6, 257
-; RV64-NEXT: vsetvli a6, zero, e8, mf4, ta, ma
; RV64-NEXT: vslidedown.vx v0, v0, a7
; RV64-NEXT: slli a7, a3, 32
; RV64-NEXT: add a7, a3, a7
@@ -2586,11 +2586,11 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
;
; CHECK-ZVBB-LABEL: vp_cttz_nxv16i64:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 3
; CHECK-ZVBB-NEXT: sub a3, a0, a1
-; CHECK-ZVBB-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sltu a2, a0, a3
; CHECK-ZVBB-NEXT: addi a2, a2, -1
@@ -4002,6 +4002,7 @@ define <vscale x 16 x i64> @vp_cttz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -4012,7 +4013,6 @@ define <vscale x 16 x i64> @vp_cttz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
; CHECK-NEXT: fsrmi a3, 1
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a4, a0, a1
-; CHECK-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a4
; CHECK-NEXT: addi a2, a2, -1
@@ -4057,11 +4057,11 @@ define <vscale x 16 x i64> @vp_cttz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
;
; CHECK-ZVBB-LABEL: vp_cttz_zero_undef_nxv16i64:
; CHECK-ZVBB: # %bb.0:
+; CHECK-ZVBB-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vmv1r.v v24, v0
; CHECK-ZVBB-NEXT: csrr a1, vlenb
; CHECK-ZVBB-NEXT: srli a2, a1, 3
; CHECK-ZVBB-NEXT: sub a3, a0, a1
-; CHECK-ZVBB-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-ZVBB-NEXT: vslidedown.vx v0, v0, a2
; CHECK-ZVBB-NEXT: sltu a2, a0, a3
; CHECK-ZVBB-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/expandload.ll b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
index b32d85bb1943a5..13a7e444b77f3b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/expandload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
@@ -227,9 +227,9 @@ define <256 x i8> @test_expandload_v256i8(ptr %base, <256 x i1> %mask, <256 x i8
; CHECK-RV32-NEXT: add a2, sp, a2
; CHECK-RV32-NEXT: addi a2, a2, 16
; CHECK-RV32-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v7, v8
; CHECK-RV32-NEXT: li a2, 128
-; CHECK-RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; CHECK-RV32-NEXT: vslidedown.vi v9, v0, 1
; CHECK-RV32-NEXT: li a3, 32
; CHECK-RV32-NEXT: vmv.x.s a4, v0
@@ -338,9 +338,9 @@ define <256 x i8> @test_expandload_v256i8(ptr %base, <256 x i1> %mask, <256 x i8
; CHECK-RV64-NEXT: add a2, sp, a2
; CHECK-RV64-NEXT: addi a2, a2, 16
; CHECK-RV64-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-RV64-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v7, v8
; CHECK-RV64-NEXT: li a2, 128
-; CHECK-RV64-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; CHECK-RV64-NEXT: vslidedown.vi v9, v0, 1
; CHECK-RV64-NEXT: vmv.x.s a3, v0
; CHECK-RV64-NEXT: vsetvli zero, a2, e8, m8, ta, ma
@@ -1626,8 +1626,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a1, .LBB61_30
; CHECK-RV32-NEXT: .LBB61_29: # %cond.load109
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 29, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 28
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -1639,8 +1639,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a2, .LBB61_32
; CHECK-RV32-NEXT: # %bb.31: # %cond.load113
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 30, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a2
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 29
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -1787,6 +1787,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_65: # %cond.load241
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -1940,6 +1941,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_99: # %cond.load369
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -2093,6 +2095,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_133: # %cond.load497
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -2246,6 +2249,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_167: # %cond.load625
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -2399,6 +2403,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_201: # %cond.load753
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -2552,6 +2557,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_235: # %cond.load881
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -2705,6 +2711,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_269: # %cond.load1009
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -3907,10 +3914,9 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_2
; CHECK-RV32-NEXT: .LBB61_545: # %cond.load1
; CHECK-RV32-NEXT: lbu a1, 0(a0)
+; CHECK-RV32-NEXT: vsetivli zero, 2, e8, m1, tu, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, zero, e8, mf8, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a1
-; CHECK-RV32-NEXT: vsetivli zero, 2, e8, m1, tu, ma
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 1
; CHECK-RV32-NEXT: addi a0, a0, 1
; CHECK-RV32-NEXT: vmv1r.v v16, v8
@@ -3920,8 +3926,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_3
; CHECK-RV32-NEXT: .LBB61_546: # %cond.load5
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 3, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 2
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3932,8 +3938,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_4
; CHECK-RV32-NEXT: .LBB61_547: # %cond.load9
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 3
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3944,8 +3950,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_5
; CHECK-RV32-NEXT: .LBB61_548: # %cond.load13
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 5, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 4
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3956,8 +3962,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_6
; CHECK-RV32-NEXT: .LBB61_549: # %cond.load17
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 6, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 5
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3968,8 +3974,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_7
; CHECK-RV32-NEXT: .LBB61_550: # %cond.load21
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 7, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 6
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3980,8 +3986,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_8
; CHECK-RV32-NEXT: .LBB61_551: # %cond.load25
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 7
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -3992,8 +3998,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_9
; CHECK-RV32-NEXT: .LBB61_552: # %cond.load29
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 9, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 8
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4004,8 +4010,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_10
; CHECK-RV32-NEXT: .LBB61_553: # %cond.load33
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 10, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 9
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4016,8 +4022,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_11
; CHECK-RV32-NEXT: .LBB61_554: # %cond.load37
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 11, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 10
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4028,8 +4034,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_12
; CHECK-RV32-NEXT: .LBB61_555: # %cond.load41
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 12, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 11
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4040,8 +4046,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_13
; CHECK-RV32-NEXT: .LBB61_556: # %cond.load45
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 13, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 12
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4052,8 +4058,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_14
; CHECK-RV32-NEXT: .LBB61_557: # %cond.load49
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 14, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 13
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4064,8 +4070,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_15
; CHECK-RV32-NEXT: .LBB61_558: # %cond.load53
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 15, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 14
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4076,8 +4082,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_16
; CHECK-RV32-NEXT: .LBB61_559: # %cond.load57
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 16, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 15
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4088,8 +4094,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_17
; CHECK-RV32-NEXT: .LBB61_560: # %cond.load61
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 17, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 16
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4100,8 +4106,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_18
; CHECK-RV32-NEXT: .LBB61_561: # %cond.load65
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 18, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 17
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4112,8 +4118,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_19
; CHECK-RV32-NEXT: .LBB61_562: # %cond.load69
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 19, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 18
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4124,8 +4130,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_20
; CHECK-RV32-NEXT: .LBB61_563: # %cond.load73
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 20, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 19
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4136,8 +4142,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_21
; CHECK-RV32-NEXT: .LBB61_564: # %cond.load77
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 21, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 20
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4148,8 +4154,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_22
; CHECK-RV32-NEXT: .LBB61_565: # %cond.load81
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 22, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 21
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4160,8 +4166,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_23
; CHECK-RV32-NEXT: .LBB61_566: # %cond.load85
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 23, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 22
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4172,8 +4178,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_24
; CHECK-RV32-NEXT: .LBB61_567: # %cond.load89
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 24, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 23
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4184,8 +4190,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_25
; CHECK-RV32-NEXT: .LBB61_568: # %cond.load93
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 25, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 24
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4196,8 +4202,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_26
; CHECK-RV32-NEXT: .LBB61_569: # %cond.load97
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 26, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 25
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4208,8 +4214,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_27
; CHECK-RV32-NEXT: .LBB61_570: # %cond.load101
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 27, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 26
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4220,8 +4226,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_28
; CHECK-RV32-NEXT: .LBB61_571: # %cond.load105
; CHECK-RV32-NEXT: lbu a1, 0(a0)
-; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetivli zero, 28, e8, m1, tu, ma
+; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vmv.s.x v9, a1
; CHECK-RV32-NEXT: vslideup.vi v8, v9, 27
; CHECK-RV32-NEXT: addi a0, a0, 1
@@ -4248,6 +4254,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_573: # %cond.load125
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4264,6 +4271,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_574: # %cond.load129
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4280,6 +4288,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_575: # %cond.load133
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4296,6 +4305,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_576: # %cond.load137
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4312,6 +4322,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_577: # %cond.load141
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4328,6 +4339,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_578: # %cond.load145
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4344,6 +4356,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_579: # %cond.load149
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4360,6 +4373,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_580: # %cond.load153
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4376,6 +4390,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_581: # %cond.load157
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4392,6 +4407,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_582: # %cond.load161
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4408,6 +4424,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_583: # %cond.load165
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4424,6 +4441,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_584: # %cond.load169
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4440,6 +4458,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_585: # %cond.load173
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4456,6 +4475,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_586: # %cond.load177
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4472,6 +4492,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_587: # %cond.load181
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4488,6 +4509,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_588: # %cond.load185
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4504,6 +4526,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_589: # %cond.load189
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4520,6 +4543,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_590: # %cond.load193
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4536,6 +4560,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_591: # %cond.load197
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4552,6 +4577,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_592: # %cond.load201
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4568,6 +4594,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_593: # %cond.load205
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4584,6 +4611,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_594: # %cond.load209
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4600,6 +4628,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_595: # %cond.load213
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4616,6 +4645,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_596: # %cond.load217
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4632,6 +4662,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_597: # %cond.load221
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4648,6 +4679,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_598: # %cond.load225
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4664,6 +4696,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_599: # %cond.load229
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4680,6 +4713,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_600: # %cond.load233
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4696,6 +4730,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_601: # %cond.load237
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
@@ -4728,6 +4763,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_603: # %cond.load253
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4744,6 +4780,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_604: # %cond.load257
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4760,6 +4797,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_605: # %cond.load261
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4776,6 +4814,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_606: # %cond.load265
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4792,6 +4831,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_607: # %cond.load269
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4808,6 +4848,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_608: # %cond.load273
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4824,6 +4865,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_609: # %cond.load277
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4840,6 +4882,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_610: # %cond.load281
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4856,6 +4899,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_611: # %cond.load285
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4872,6 +4916,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_612: # %cond.load289
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4888,6 +4933,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_613: # %cond.load293
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4904,6 +4950,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_614: # %cond.load297
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4920,6 +4967,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_615: # %cond.load301
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4936,6 +4984,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_616: # %cond.load305
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4952,6 +5001,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_617: # %cond.load309
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4968,6 +5018,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_618: # %cond.load313
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -4984,6 +5035,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_619: # %cond.load317
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5000,6 +5052,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_620: # %cond.load321
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5016,6 +5069,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_621: # %cond.load325
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5032,6 +5086,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_622: # %cond.load329
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5048,6 +5103,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_623: # %cond.load333
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5064,6 +5120,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_624: # %cond.load337
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5080,6 +5137,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_625: # %cond.load341
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5096,6 +5154,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_626: # %cond.load345
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5112,6 +5171,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_627: # %cond.load349
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5128,6 +5188,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_628: # %cond.load353
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5144,6 +5205,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_629: # %cond.load357
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5160,6 +5222,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_630: # %cond.load361
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5176,6 +5239,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_631: # %cond.load365
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
@@ -5208,6 +5272,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_633: # %cond.load381
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5224,6 +5289,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_634: # %cond.load385
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5240,6 +5306,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_635: # %cond.load389
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5256,6 +5323,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_636: # %cond.load393
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5272,6 +5340,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_637: # %cond.load397
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5288,6 +5357,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_638: # %cond.load401
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5304,6 +5374,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_639: # %cond.load405
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5320,6 +5391,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_640: # %cond.load409
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5336,6 +5408,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_641: # %cond.load413
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5352,6 +5425,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_642: # %cond.load417
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5368,6 +5442,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_643: # %cond.load421
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5384,6 +5459,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_644: # %cond.load425
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5400,6 +5476,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_645: # %cond.load429
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5416,6 +5493,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_646: # %cond.load433
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5432,6 +5510,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_647: # %cond.load437
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5448,6 +5527,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_648: # %cond.load441
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5464,6 +5544,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_649: # %cond.load445
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5480,6 +5561,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_650: # %cond.load449
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5496,6 +5578,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_651: # %cond.load453
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5512,6 +5595,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_652: # %cond.load457
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5528,6 +5612,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_653: # %cond.load461
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5544,6 +5629,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_654: # %cond.load465
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5560,6 +5646,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_655: # %cond.load469
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5576,6 +5663,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_656: # %cond.load473
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5592,6 +5680,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_657: # %cond.load477
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5608,6 +5697,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_658: # %cond.load481
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5624,6 +5714,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_659: # %cond.load485
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5640,6 +5731,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_660: # %cond.load489
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5656,6 +5748,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_661: # %cond.load493
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
@@ -5688,6 +5781,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_663: # %cond.load509
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5704,6 +5798,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_664: # %cond.load513
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5720,6 +5815,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_665: # %cond.load517
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5736,6 +5832,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_666: # %cond.load521
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5752,6 +5849,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_667: # %cond.load525
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5768,6 +5866,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_668: # %cond.load529
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5784,6 +5883,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_669: # %cond.load533
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5800,6 +5900,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_670: # %cond.load537
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5816,6 +5917,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_671: # %cond.load541
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5832,6 +5934,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_672: # %cond.load545
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5848,6 +5951,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_673: # %cond.load549
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5864,6 +5968,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_674: # %cond.load553
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5880,6 +5985,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_675: # %cond.load557
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5896,6 +6002,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_676: # %cond.load561
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5912,6 +6019,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_677: # %cond.load565
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5928,6 +6036,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_678: # %cond.load569
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5944,6 +6053,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_679: # %cond.load573
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5960,6 +6070,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_680: # %cond.load577
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5976,6 +6087,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_681: # %cond.load581
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -5992,6 +6104,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_682: # %cond.load585
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6008,6 +6121,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_683: # %cond.load589
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6024,6 +6138,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_684: # %cond.load593
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6040,6 +6155,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_685: # %cond.load597
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6056,6 +6172,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_686: # %cond.load601
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6072,6 +6189,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_687: # %cond.load605
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6088,6 +6206,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_688: # %cond.load609
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6104,6 +6223,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_689: # %cond.load613
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6120,6 +6240,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_690: # %cond.load617
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6136,6 +6257,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_691: # %cond.load621
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6168,6 +6290,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_693: # %cond.load637
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6184,6 +6307,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_694: # %cond.load641
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6200,6 +6324,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_695: # %cond.load645
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6216,6 +6341,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_696: # %cond.load649
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6232,6 +6358,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_697: # %cond.load653
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6248,6 +6375,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_698: # %cond.load657
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6264,6 +6392,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_699: # %cond.load661
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6280,6 +6409,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_700: # %cond.load665
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6296,6 +6426,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_701: # %cond.load669
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6312,6 +6443,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_702: # %cond.load673
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6328,6 +6460,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_703: # %cond.load677
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6344,6 +6477,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_704: # %cond.load681
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6360,6 +6494,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_705: # %cond.load685
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6376,6 +6511,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_706: # %cond.load689
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6392,6 +6528,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_707: # %cond.load693
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6408,6 +6545,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_708: # %cond.load697
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6424,6 +6562,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_709: # %cond.load701
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6440,6 +6579,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_710: # %cond.load705
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6456,6 +6596,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_711: # %cond.load709
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6472,6 +6613,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_712: # %cond.load713
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6488,6 +6630,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_713: # %cond.load717
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6504,6 +6647,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_714: # %cond.load721
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6520,6 +6664,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_715: # %cond.load725
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6536,6 +6681,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_716: # %cond.load729
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6552,6 +6698,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_717: # %cond.load733
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6568,6 +6715,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_718: # %cond.load737
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6584,6 +6732,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_719: # %cond.load741
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6600,6 +6749,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_720: # %cond.load745
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6616,6 +6766,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_721: # %cond.load749
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -6648,6 +6799,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_723: # %cond.load765
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6664,6 +6816,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_724: # %cond.load769
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6680,6 +6833,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_725: # %cond.load773
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6696,6 +6850,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_726: # %cond.load777
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6712,6 +6867,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_727: # %cond.load781
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6728,6 +6884,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_728: # %cond.load785
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6744,6 +6901,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_729: # %cond.load789
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6760,6 +6918,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_730: # %cond.load793
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6776,6 +6935,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_731: # %cond.load797
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6792,6 +6952,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_732: # %cond.load801
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6808,6 +6969,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_733: # %cond.load805
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6824,6 +6986,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_734: # %cond.load809
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6840,6 +7003,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_735: # %cond.load813
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6856,6 +7020,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_736: # %cond.load817
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6872,6 +7037,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_737: # %cond.load821
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6888,6 +7054,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_738: # %cond.load825
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6904,6 +7071,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_739: # %cond.load829
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6920,6 +7088,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_740: # %cond.load833
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6936,6 +7105,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_741: # %cond.load837
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6952,6 +7122,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_742: # %cond.load841
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6968,6 +7139,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_743: # %cond.load845
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -6984,6 +7156,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_744: # %cond.load849
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7000,6 +7173,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_745: # %cond.load853
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7016,6 +7190,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_746: # %cond.load857
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7032,6 +7207,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_747: # %cond.load861
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7048,6 +7224,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_748: # %cond.load865
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7064,6 +7241,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_749: # %cond.load869
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7080,6 +7258,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_750: # %cond.load873
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7096,6 +7275,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_751: # %cond.load877
; CHECK-RV32-NEXT: lbu a2, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
@@ -7128,6 +7308,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_753: # %cond.load893
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7144,6 +7325,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_754: # %cond.load897
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7160,6 +7342,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_755: # %cond.load901
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7176,6 +7359,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_756: # %cond.load905
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7192,6 +7376,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_757: # %cond.load909
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7208,6 +7393,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_758: # %cond.load913
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7224,6 +7410,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_759: # %cond.load917
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7240,6 +7427,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_760: # %cond.load921
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7256,6 +7444,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_761: # %cond.load925
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7272,6 +7461,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_762: # %cond.load929
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7288,6 +7478,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_763: # %cond.load933
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7304,6 +7495,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_764: # %cond.load937
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7320,6 +7512,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_765: # %cond.load941
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7336,6 +7529,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_766: # %cond.load945
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7352,6 +7546,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_767: # %cond.load949
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7368,6 +7563,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_768: # %cond.load953
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7384,6 +7580,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_769: # %cond.load957
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7400,6 +7597,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_770: # %cond.load961
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7416,6 +7614,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_771: # %cond.load965
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7432,6 +7631,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_772: # %cond.load969
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7448,6 +7648,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_773: # %cond.load973
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7464,6 +7665,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_774: # %cond.load977
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7480,6 +7682,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_775: # %cond.load981
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7496,6 +7699,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_776: # %cond.load985
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7512,6 +7716,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_777: # %cond.load989
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7528,6 +7733,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_778: # %cond.load993
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7544,6 +7750,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_779: # %cond.load997
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7560,6 +7767,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_780: # %cond.load1001
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -7576,6 +7784,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: .LBB61_781: # %cond.load1005
; CHECK-RV32-NEXT: lbu a3, 0(a0)
; CHECK-RV32-NEXT: li a4, 512
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
@@ -10999,6 +11208,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_62: # %cond.load241
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -11280,6 +11490,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_128: # %cond.load497
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -11561,6 +11772,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_194: # %cond.load753
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -11842,6 +12054,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_260: # %cond.load1009
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -12968,10 +13181,9 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_2
; CHECK-RV64-NEXT: .LBB61_528: # %cond.load1
; CHECK-RV64-NEXT: lbu a1, 0(a0)
+; CHECK-RV64-NEXT: vsetivli zero, 2, e8, m1, tu, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, zero, e8, mf8, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
-; CHECK-RV64-NEXT: vsetivli zero, 2, e8, m1, tu, ma
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 1
; CHECK-RV64-NEXT: addi a0, a0, 1
; CHECK-RV64-NEXT: vmv1r.v v16, v8
@@ -12981,8 +13193,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_3
; CHECK-RV64-NEXT: .LBB61_529: # %cond.load5
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 3, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 2
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -12993,8 +13205,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_4
; CHECK-RV64-NEXT: .LBB61_530: # %cond.load9
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 3
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13005,8 +13217,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_5
; CHECK-RV64-NEXT: .LBB61_531: # %cond.load13
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 5, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 4
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13017,8 +13229,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_6
; CHECK-RV64-NEXT: .LBB61_532: # %cond.load17
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 6, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 5
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13029,8 +13241,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_7
; CHECK-RV64-NEXT: .LBB61_533: # %cond.load21
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 7, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 6
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13041,8 +13253,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_8
; CHECK-RV64-NEXT: .LBB61_534: # %cond.load25
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 7
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13053,8 +13265,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_9
; CHECK-RV64-NEXT: .LBB61_535: # %cond.load29
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 9, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 8
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13065,8 +13277,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_10
; CHECK-RV64-NEXT: .LBB61_536: # %cond.load33
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 10, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 9
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13077,8 +13289,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_11
; CHECK-RV64-NEXT: .LBB61_537: # %cond.load37
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 11, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 10
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13089,8 +13301,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_12
; CHECK-RV64-NEXT: .LBB61_538: # %cond.load41
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 12, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 11
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13101,8 +13313,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_13
; CHECK-RV64-NEXT: .LBB61_539: # %cond.load45
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 13, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 12
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13113,8 +13325,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_14
; CHECK-RV64-NEXT: .LBB61_540: # %cond.load49
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 14, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 13
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13125,8 +13337,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_15
; CHECK-RV64-NEXT: .LBB61_541: # %cond.load53
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 15, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 14
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13137,8 +13349,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_16
; CHECK-RV64-NEXT: .LBB61_542: # %cond.load57
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 16, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 15
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13149,8 +13361,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_17
; CHECK-RV64-NEXT: .LBB61_543: # %cond.load61
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 17, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 16
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13161,8 +13373,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_18
; CHECK-RV64-NEXT: .LBB61_544: # %cond.load65
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 18, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 17
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13173,8 +13385,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_19
; CHECK-RV64-NEXT: .LBB61_545: # %cond.load69
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 19, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 18
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13185,8 +13397,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_20
; CHECK-RV64-NEXT: .LBB61_546: # %cond.load73
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 20, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 19
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13197,8 +13409,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_21
; CHECK-RV64-NEXT: .LBB61_547: # %cond.load77
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 21, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 20
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13209,8 +13421,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_22
; CHECK-RV64-NEXT: .LBB61_548: # %cond.load81
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 22, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 21
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13221,8 +13433,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_23
; CHECK-RV64-NEXT: .LBB61_549: # %cond.load85
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 23, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 22
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13233,8 +13445,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_24
; CHECK-RV64-NEXT: .LBB61_550: # %cond.load89
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 24, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 23
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13245,8 +13457,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_25
; CHECK-RV64-NEXT: .LBB61_551: # %cond.load93
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 25, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 24
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13257,8 +13469,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_26
; CHECK-RV64-NEXT: .LBB61_552: # %cond.load97
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 26, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 25
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13269,8 +13481,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_27
; CHECK-RV64-NEXT: .LBB61_553: # %cond.load101
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 27, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 26
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13281,8 +13493,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_28
; CHECK-RV64-NEXT: .LBB61_554: # %cond.load105
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 28, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 27
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13293,8 +13505,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_29
; CHECK-RV64-NEXT: .LBB61_555: # %cond.load109
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 29, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 28
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13305,8 +13517,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_30
; CHECK-RV64-NEXT: .LBB61_556: # %cond.load113
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 30, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 29
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13317,8 +13529,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_31
; CHECK-RV64-NEXT: .LBB61_557: # %cond.load117
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetivli zero, 31, e8, m1, tu, ma
+; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: vslideup.vi v8, v9, 30
; CHECK-RV64-NEXT: addi a0, a0, 1
@@ -13330,6 +13542,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_558: # %cond.load121
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13345,6 +13558,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_559: # %cond.load125
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13361,6 +13575,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_560: # %cond.load129
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13377,6 +13592,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_561: # %cond.load133
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13393,6 +13609,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_562: # %cond.load137
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13409,6 +13626,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_563: # %cond.load141
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13425,6 +13643,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_564: # %cond.load145
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13441,6 +13660,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_565: # %cond.load149
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13457,6 +13677,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_566: # %cond.load153
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13473,6 +13694,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_567: # %cond.load157
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13489,6 +13711,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_568: # %cond.load161
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13505,6 +13728,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_569: # %cond.load165
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13521,6 +13745,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_570: # %cond.load169
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13537,6 +13762,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_571: # %cond.load173
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13553,6 +13779,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_572: # %cond.load177
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13569,6 +13796,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_573: # %cond.load181
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13585,6 +13813,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_574: # %cond.load185
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13601,6 +13830,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_575: # %cond.load189
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13617,6 +13847,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_576: # %cond.load193
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13633,6 +13864,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_577: # %cond.load197
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13649,6 +13881,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_578: # %cond.load201
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13665,6 +13898,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_579: # %cond.load205
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13681,6 +13915,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_580: # %cond.load209
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13697,6 +13932,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_581: # %cond.load213
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13713,6 +13949,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_582: # %cond.load217
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13729,6 +13966,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_583: # %cond.load221
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13745,6 +13983,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_584: # %cond.load225
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13761,6 +14000,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_585: # %cond.load229
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13777,6 +14017,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_586: # %cond.load233
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13793,6 +14034,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_587: # %cond.load237
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
@@ -13825,6 +14067,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_589: # %cond.load253
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13841,6 +14084,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_590: # %cond.load257
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13857,6 +14101,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_591: # %cond.load261
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13873,6 +14118,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_592: # %cond.load265
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13889,6 +14135,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_593: # %cond.load269
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13905,6 +14152,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_594: # %cond.load273
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13921,6 +14169,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_595: # %cond.load277
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13937,6 +14186,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_596: # %cond.load281
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13953,6 +14203,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_597: # %cond.load285
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13969,6 +14220,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_598: # %cond.load289
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -13985,6 +14237,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_599: # %cond.load293
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14001,6 +14254,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_600: # %cond.load297
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14017,6 +14271,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_601: # %cond.load301
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14033,6 +14288,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_602: # %cond.load305
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14049,6 +14305,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_603: # %cond.load309
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14065,6 +14322,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_604: # %cond.load313
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14081,6 +14339,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_605: # %cond.load317
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14097,6 +14356,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_606: # %cond.load321
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14113,6 +14373,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_607: # %cond.load325
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14129,6 +14390,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_608: # %cond.load329
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14145,6 +14407,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_609: # %cond.load333
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14161,6 +14424,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_610: # %cond.load337
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14177,6 +14441,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_611: # %cond.load341
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14193,6 +14458,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_612: # %cond.load345
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14209,6 +14475,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_613: # %cond.load349
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14225,6 +14492,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_614: # %cond.load353
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14241,6 +14509,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_615: # %cond.load357
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14257,6 +14526,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_616: # %cond.load361
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14273,6 +14543,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_617: # %cond.load365
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14289,6 +14560,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_618: # %cond.load369
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14305,6 +14577,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_619: # %cond.load373
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14321,6 +14594,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_620: # %cond.load377
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14337,6 +14611,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_621: # %cond.load381
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14353,6 +14628,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_622: # %cond.load385
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14369,6 +14645,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_623: # %cond.load389
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14385,6 +14662,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_624: # %cond.load393
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14401,6 +14679,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_625: # %cond.load397
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14417,6 +14696,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_626: # %cond.load401
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14433,6 +14713,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_627: # %cond.load405
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14449,6 +14730,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_628: # %cond.load409
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14465,6 +14747,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_629: # %cond.load413
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14481,6 +14764,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_630: # %cond.load417
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14497,6 +14781,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_631: # %cond.load421
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14513,6 +14798,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_632: # %cond.load425
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14529,6 +14815,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_633: # %cond.load429
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14545,6 +14832,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_634: # %cond.load433
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14561,6 +14849,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_635: # %cond.load437
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14577,6 +14866,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_636: # %cond.load441
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14593,6 +14883,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_637: # %cond.load445
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14609,6 +14900,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_638: # %cond.load449
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14625,6 +14917,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_639: # %cond.load453
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14641,6 +14934,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_640: # %cond.load457
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14657,6 +14951,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_641: # %cond.load461
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14673,6 +14968,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_642: # %cond.load465
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14689,6 +14985,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_643: # %cond.load469
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14705,6 +15002,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_644: # %cond.load473
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14721,6 +15019,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_645: # %cond.load477
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14737,6 +15036,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_646: # %cond.load481
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14753,6 +15053,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_647: # %cond.load485
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14769,6 +15070,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_648: # %cond.load489
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14785,6 +15087,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_649: # %cond.load493
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
@@ -14817,6 +15120,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_651: # %cond.load509
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14833,6 +15137,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_652: # %cond.load513
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14849,6 +15154,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_653: # %cond.load517
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14865,6 +15171,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_654: # %cond.load521
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14881,6 +15188,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_655: # %cond.load525
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14897,6 +15205,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_656: # %cond.load529
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14913,6 +15222,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_657: # %cond.load533
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14929,6 +15239,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_658: # %cond.load537
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14945,6 +15256,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_659: # %cond.load541
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14961,6 +15273,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_660: # %cond.load545
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14977,6 +15290,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_661: # %cond.load549
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -14993,6 +15307,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_662: # %cond.load553
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15009,6 +15324,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_663: # %cond.load557
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15025,6 +15341,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_664: # %cond.load561
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15041,6 +15358,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_665: # %cond.load565
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15057,6 +15375,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_666: # %cond.load569
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15073,6 +15392,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_667: # %cond.load573
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15089,6 +15409,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_668: # %cond.load577
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15105,6 +15426,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_669: # %cond.load581
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15121,6 +15443,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_670: # %cond.load585
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15137,6 +15460,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_671: # %cond.load589
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15153,6 +15477,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_672: # %cond.load593
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15169,6 +15494,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_673: # %cond.load597
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15185,6 +15511,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_674: # %cond.load601
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15201,6 +15528,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_675: # %cond.load605
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15217,6 +15545,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_676: # %cond.load609
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15233,6 +15562,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_677: # %cond.load613
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15249,6 +15579,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_678: # %cond.load617
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15265,6 +15596,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_679: # %cond.load621
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15281,6 +15613,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_680: # %cond.load625
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15297,6 +15630,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_681: # %cond.load629
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15313,6 +15647,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_682: # %cond.load633
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15329,6 +15664,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_683: # %cond.load637
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15345,6 +15681,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_684: # %cond.load641
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15361,6 +15698,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_685: # %cond.load645
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15377,6 +15715,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_686: # %cond.load649
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15393,6 +15732,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_687: # %cond.load653
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15409,6 +15749,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_688: # %cond.load657
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15425,6 +15766,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_689: # %cond.load661
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15441,6 +15783,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_690: # %cond.load665
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15457,6 +15800,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_691: # %cond.load669
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15473,6 +15817,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_692: # %cond.load673
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15489,6 +15834,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_693: # %cond.load677
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15505,6 +15851,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_694: # %cond.load681
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15521,6 +15868,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_695: # %cond.load685
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15537,6 +15885,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_696: # %cond.load689
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15553,6 +15902,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_697: # %cond.load693
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15569,6 +15919,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_698: # %cond.load697
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15585,6 +15936,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_699: # %cond.load701
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15601,6 +15953,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_700: # %cond.load705
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15617,6 +15970,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_701: # %cond.load709
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15633,6 +15987,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_702: # %cond.load713
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15649,6 +16004,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_703: # %cond.load717
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15665,6 +16021,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_704: # %cond.load721
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15681,6 +16038,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_705: # %cond.load725
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15697,6 +16055,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_706: # %cond.load729
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15713,6 +16072,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_707: # %cond.load733
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15729,6 +16089,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_708: # %cond.load737
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15745,6 +16106,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_709: # %cond.load741
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15761,6 +16123,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_710: # %cond.load745
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15777,6 +16140,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_711: # %cond.load749
; CHECK-RV64-NEXT: lbu a1, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
@@ -15809,6 +16173,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_713: # %cond.load765
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15825,6 +16190,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_714: # %cond.load769
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15841,6 +16207,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_715: # %cond.load773
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15857,6 +16224,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_716: # %cond.load777
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15873,6 +16241,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_717: # %cond.load781
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15889,6 +16258,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_718: # %cond.load785
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15905,6 +16275,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_719: # %cond.load789
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15921,6 +16292,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_720: # %cond.load793
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15937,6 +16309,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_721: # %cond.load797
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15953,6 +16326,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_722: # %cond.load801
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15969,6 +16343,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_723: # %cond.load805
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -15985,6 +16360,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_724: # %cond.load809
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16001,6 +16377,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_725: # %cond.load813
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16017,6 +16394,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_726: # %cond.load817
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16033,6 +16411,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_727: # %cond.load821
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16049,6 +16428,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_728: # %cond.load825
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16065,6 +16445,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_729: # %cond.load829
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16081,6 +16462,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_730: # %cond.load833
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16097,6 +16479,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_731: # %cond.load837
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16113,6 +16496,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_732: # %cond.load841
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16129,6 +16513,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_733: # %cond.load845
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16145,6 +16530,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_734: # %cond.load849
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16161,6 +16547,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_735: # %cond.load853
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16177,6 +16564,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_736: # %cond.load857
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16193,6 +16581,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_737: # %cond.load861
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16209,6 +16598,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_738: # %cond.load865
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16225,6 +16615,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_739: # %cond.load869
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16241,6 +16632,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_740: # %cond.load873
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16257,6 +16649,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_741: # %cond.load877
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16273,6 +16666,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_742: # %cond.load881
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16289,6 +16683,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_743: # %cond.load885
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16305,6 +16700,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_744: # %cond.load889
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16321,6 +16717,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_745: # %cond.load893
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16337,6 +16734,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_746: # %cond.load897
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16353,6 +16751,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_747: # %cond.load901
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16369,6 +16768,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_748: # %cond.load905
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16385,6 +16785,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_749: # %cond.load909
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16401,6 +16802,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_750: # %cond.load913
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16417,6 +16819,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_751: # %cond.load917
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16433,6 +16836,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_752: # %cond.load921
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16449,6 +16853,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_753: # %cond.load925
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16465,6 +16870,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_754: # %cond.load929
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16481,6 +16887,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_755: # %cond.load933
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16497,6 +16904,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_756: # %cond.load937
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16513,6 +16921,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_757: # %cond.load941
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16529,6 +16938,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_758: # %cond.load945
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16545,6 +16955,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_759: # %cond.load949
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16561,6 +16972,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_760: # %cond.load953
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16577,6 +16989,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_761: # %cond.load957
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16593,6 +17006,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_762: # %cond.load961
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16609,6 +17023,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_763: # %cond.load965
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16625,6 +17040,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_764: # %cond.load969
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16641,6 +17057,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_765: # %cond.load973
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16657,6 +17074,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_766: # %cond.load977
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16673,6 +17091,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_767: # %cond.load981
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16689,6 +17108,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_768: # %cond.load985
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16705,6 +17125,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_769: # %cond.load989
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16721,6 +17142,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_770: # %cond.load993
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16737,6 +17159,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_771: # %cond.load997
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16753,6 +17176,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_772: # %cond.load1001
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
@@ -16769,6 +17193,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: .LBB61_773: # %cond.load1005
; CHECK-RV64-NEXT: lbu a2, 0(a0)
; CHECK-RV64-NEXT: li a3, 512
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
index 869478a1efa78d..5a974f88ed8f38 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
@@ -13,6 +13,7 @@ define <vscale x 4 x i32> @extract_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec) {
define <vscale x 4 x i32> @extract_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv4i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, i64 4)
@@ -30,6 +31,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 2)
@@ -39,6 +41,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 4)
@@ -48,6 +51,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_6(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_6:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v11
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 6)
@@ -65,6 +69,7 @@ define <vscale x 8 x i32> @extract_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec)
define <vscale x 8 x i32> @extract_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv8i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 8 x i32> @llvm.vector.extract.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -82,6 +87,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 4)
@@ -91,6 +97,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -100,6 +107,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_12:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v14
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 12)
@@ -117,6 +125,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 2)
@@ -126,6 +135,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 4)
@@ -135,6 +145,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_6:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v11
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 6)
@@ -144,6 +155,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -153,6 +165,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_10:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v13
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 10)
@@ -162,6 +175,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_12:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v14
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 12)
@@ -171,6 +185,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_14(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_14:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v15
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 14)
@@ -224,6 +239,7 @@ define <vscale x 1 x i32> @extract_nxv16i32_nxv1i32_15(<vscale x 16 x i32> %vec)
define <vscale x 1 x i32> @extract_nxv16i32_nxv1i32_2(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv1i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 1 x i32> @llvm.vector.extract.nxv1i32.nxv16i32(<vscale x 16 x i32> %vec, i64 2)
@@ -287,6 +303,7 @@ define <vscale x 2 x i8> @extract_nxv32i8_nxv2i8_6(<vscale x 32 x i8> %vec) {
define <vscale x 2 x i8> @extract_nxv32i8_nxv2i8_8(<vscale x 32 x i8> %vec) {
; CHECK-LABEL: extract_nxv32i8_nxv2i8_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i8> @llvm.vector.extract.nxv2i8.nxv32i8(<vscale x 32 x i8> %vec, i64 8)
@@ -357,6 +374,7 @@ define <vscale x 2 x half> @extract_nxv2f16_nxv16f16_2(<vscale x 16 x half> %vec
define <vscale x 2 x half> @extract_nxv2f16_nxv16f16_4(<vscale x 16 x half> %vec) {
; CHECK-LABEL: extract_nxv2f16_nxv16f16_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x half> @llvm.vector.extract.nxv2f16.nxv16f16(<vscale x 16 x half> %vec, i64 4)
@@ -504,6 +522,7 @@ define <vscale x 2 x bfloat> @extract_nxv2bf16_nxv16bf16_2(<vscale x 16 x bfloat
define <vscale x 2 x bfloat> @extract_nxv2bf16_nxv16bf16_4(<vscale x 16 x bfloat> %vec) {
; CHECK-LABEL: extract_nxv2bf16_nxv16bf16_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x bfloat> @llvm.vector.extract.nxv2bf16.nxv16bf16(<vscale x 16 x bfloat> %vec, i64 4)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
index ce83e2d8a62206..1a5ca429b531fa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
@@ -16,10 +16,10 @@ define <512 x i8> @single_source(<512 x i8> %a) {
; CHECK-NEXT: addi s0, sp, 1536
; CHECK-NEXT: .cfi_def_cfa s0, 0
; CHECK-NEXT: andi sp, sp, -512
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: li a0, 512
; CHECK-NEXT: addi a1, sp, 512
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.x.s a2, v16
; CHECK-NEXT: vslidedown.vi v24, v16, 5
; CHECK-NEXT: li a3, 432
@@ -104,10 +104,10 @@ define <512 x i8> @two_source(<512 x i8> %a, <512 x i8> %b) {
; CHECK-NEXT: addi s0, sp, 1536
; CHECK-NEXT: .cfi_def_cfa s0, 0
; CHECK-NEXT: andi sp, sp, -512
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a0, 512
; CHECK-NEXT: addi a1, sp, 512
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vi v0, v24, 5
; CHECK-NEXT: vmv.x.s a2, v24
; CHECK-NEXT: li a3, 432
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
index 3eb5d36b4896a7..226cce4dbaf09f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
@@ -1659,6 +1659,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
@@ -2055,6 +2056,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
index ee953a66a004f3..e2ce999eb0f77f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
@@ -180,6 +180,7 @@ define fastcc <32 x i32> @ret_v32i32_call_v32i32_v32i32_i32(<32 x i32> %x, <32 x
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; CHECK-NEXT: .cfi_offset ra, -8
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a1, 2
; CHECK-NEXT: vmv8r.v v8, v16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
index 73e148edbe2d67..d83540281b415b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
@@ -180,6 +180,7 @@ define <32 x i32> @ret_v32i32_call_v32i32_v32i32_i32(<32 x i32> %x, <32 x i32> %
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; CHECK-NEXT: .cfi_offset ra, -8
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a1, 2
; CHECK-NEXT: vmv8r.v v8, v16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
index 511242aa677c2a..d40c672cdb6a1c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
@@ -194,8 +194,8 @@ define <8 x half> @vp_ceil_v8f16(<8 x half> %va, <8 x i1> %m, i32 zeroext %evl)
;
; ZVFHMIN-LABEL: vp_ceil_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -261,6 +261,7 @@ declare <16 x half> @llvm.vp.ceil.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_ceil_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -280,8 +281,8 @@ define <16 x half> @vp_ceil_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %e
;
; ZVFHMIN-LABEL: vp_ceil_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -431,6 +432,7 @@ declare <8 x float> @llvm.vp.ceil.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_ceil_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -475,6 +477,7 @@ declare <16 x float> @llvm.vp.ceil.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_ceil_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -561,6 +564,7 @@ declare <4 x double> @llvm.vp.ceil.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_ceil_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -605,6 +609,7 @@ declare <8 x double> @llvm.vp.ceil.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_ceil_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -649,6 +654,7 @@ declare <15 x double> @llvm.vp.ceil.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_ceil_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -693,6 +699,7 @@ declare <16 x double> @llvm.vp.ceil.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_ceil_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -743,6 +750,7 @@ define <32 x double> @vp_ceil_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -750,7 +758,6 @@ define <32 x double> @vp_ceil_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v24, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
index 5e73e6df9170c2..bbf97891fd9b10 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
@@ -1796,6 +1796,7 @@ define <32 x i64> @vp_ctpop_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v16
; RV32-NEXT: lui a1, 349525
; RV32-NEXT: lui a2, 209715
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
index 02e99ea513e69b..9bce8d4960e89f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
@@ -194,8 +194,8 @@ define <8 x half> @vp_floor_v8f16(<8 x half> %va, <8 x i1> %m, i32 zeroext %evl)
;
; ZVFHMIN-LABEL: vp_floor_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -261,6 +261,7 @@ declare <16 x half> @llvm.vp.floor.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_floor_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -280,8 +281,8 @@ define <16 x half> @vp_floor_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %
;
; ZVFHMIN-LABEL: vp_floor_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -431,6 +432,7 @@ declare <8 x float> @llvm.vp.floor.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_floor_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -475,6 +477,7 @@ declare <16 x float> @llvm.vp.floor.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_floor_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -561,6 +564,7 @@ declare <4 x double> @llvm.vp.floor.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_floor_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -605,6 +609,7 @@ declare <8 x double> @llvm.vp.floor.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_floor_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -649,6 +654,7 @@ declare <15 x double> @llvm.vp.floor.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_floor_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -693,6 +699,7 @@ declare <16 x double> @llvm.vp.floor.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_floor_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -743,6 +750,7 @@ define <32 x double> @vp_floor_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -750,7 +758,6 @@ define <32 x double> @vp_floor_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v24, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
index f43934afc370df..ef6a429d391a62 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
@@ -13,6 +13,7 @@ declare <2 x half> @llvm.vp.maximum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmax_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v2f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -26,8 +27,8 @@ define <2 x half> @vfmax_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmax_vv_v2f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -83,6 +84,7 @@ declare <4 x half> @llvm.vp.maximum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmax_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v4f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -96,8 +98,8 @@ define <4 x half> @vfmax_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmax_vv_v4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -153,6 +155,7 @@ declare <8 x half> @llvm.vp.maximum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmax_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -166,8 +169,8 @@ define <8 x half> @vfmax_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmax_vv_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -225,6 +228,7 @@ declare <16 x half> @llvm.vp.maximum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmax_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -240,8 +244,8 @@ define <16 x half> @vfmax_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1>
;
; ZVFHMIN-LABEL: vfmax_vv_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -299,6 +303,7 @@ declare <2 x float> @llvm.vp.maximum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmax_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -332,6 +337,7 @@ declare <4 x float> @llvm.vp.maximum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmax_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -365,6 +371,7 @@ declare <8 x float> @llvm.vp.maximum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmax_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -400,6 +407,7 @@ declare <16 x float> @llvm.vp.maximum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmax_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -435,6 +443,7 @@ declare <2 x double> @llvm.vp.maximum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmax_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -468,6 +477,7 @@ declare <4 x double> @llvm.vp.maximum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmax_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -503,6 +513,7 @@ declare <8 x double> @llvm.vp.maximum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmax_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -544,6 +555,7 @@ define <16 x double> @vfmax_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -595,6 +607,7 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 4
@@ -608,7 +621,6 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: addi a1, a0, 128
-; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vle64.v v16, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
index 7067cc21ab56d5..ca6b86f8325c4a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
@@ -13,6 +13,7 @@ declare <2 x half> @llvm.vp.minimum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmin_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v2f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -26,8 +27,8 @@ define <2 x half> @vfmin_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmin_vv_v2f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -83,6 +84,7 @@ declare <4 x half> @llvm.vp.minimum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmin_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v4f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -96,8 +98,8 @@ define <4 x half> @vfmin_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmin_vv_v4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -153,6 +155,7 @@ declare <8 x half> @llvm.vp.minimum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmin_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -166,8 +169,8 @@ define <8 x half> @vfmin_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i
;
; ZVFHMIN-LABEL: vfmin_vv_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -225,6 +228,7 @@ declare <16 x half> @llvm.vp.minimum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmin_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -240,8 +244,8 @@ define <16 x half> @vfmin_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1>
;
; ZVFHMIN-LABEL: vfmin_vv_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -299,6 +303,7 @@ declare <2 x float> @llvm.vp.minimum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmin_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -332,6 +337,7 @@ declare <4 x float> @llvm.vp.minimum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmin_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -365,6 +371,7 @@ declare <8 x float> @llvm.vp.minimum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmin_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -400,6 +407,7 @@ declare <16 x float> @llvm.vp.minimum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmin_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -435,6 +443,7 @@ declare <2 x double> @llvm.vp.minimum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmin_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -468,6 +477,7 @@ declare <4 x double> @llvm.vp.minimum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmin_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -503,6 +513,7 @@ declare <8 x double> @llvm.vp.minimum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmin_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -544,6 +555,7 @@ define <16 x double> @vfmin_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -595,6 +607,7 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 4
@@ -608,7 +621,6 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: addi a1, a0, 128
-; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vle64.v v16, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fptrunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fptrunc-vp.ll
index e64c7c87132eee..582706e4dfa184 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fptrunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fptrunc-vp.ll
@@ -97,9 +97,9 @@ declare <32 x float> @llvm.vp.fptrunc.v32f64.v32f32(<32 x double>, <32 x i1>, i3
define <32 x float> @vfptrunc_v32f32_v32f64(<32 x double> %a, <32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vfptrunc_v32f32_v32f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v12, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB7_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
index a68dc11f3d21e7..fc7cd94ca3de85 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
@@ -712,8 +712,8 @@ define <16 x i64> @fshl_v16i64(<16 x i64> %a, <16 x i64> %b, <16 x i64> %c, <16
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
+; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vle64.v v24, (a0)
; CHECK-NEXT: li a0, 63
; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
index 1fbc8dfd688c4b..9023d9732ef182 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
@@ -133,6 +133,7 @@ define <vscale x 2 x i32> @insert_nxv8i32_v4i32_0(<vscale x 2 x i32> %vec, <4 x
;
; VLS-LABEL: insert_nxv8i32_v4i32_0:
; VLS: # %bb.0:
+; VLS-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; VLS-NEXT: vmv1r.v v8, v9
; VLS-NEXT: ret
%v = call <vscale x 2 x i32> @llvm.vector.insert.nxv2i32.v4i32(<vscale x 2 x i32> %vec, <4 x i32> %subvec, i64 0)
@@ -143,6 +144,7 @@ define <vscale x 2 x i32> @insert_nxv8i32_v4i32_0(<vscale x 2 x i32> %vec, <4 x
define <4 x i32> @insert_v4i32_v4i32_0(<4 x i32> %vec, <4 x i32> %subvec) {
; CHECK-LABEL: insert_v4i32_v4i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%v = call <4 x i32> @llvm.vector.insert.v4i32.v4i32(<4 x i32> %vec, <4 x i32> %subvec, i64 0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
index 6cc3f7e76797bd..722a0aff8d9e44 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
@@ -556,11 +556,13 @@ define <4 x i8> @mgather_truemask_v4i8(<4 x ptr> %ptrs, <4 x i8> %passthru) {
define <4 x i8> @mgather_falsemask_v4i8(<4 x ptr> %ptrs, <4 x i8> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i8:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i8:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -733,13 +735,13 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB12_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB12_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB12_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB12_15
; RV64ZVE32F-NEXT: .LBB12_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB12_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB12_16
; RV64ZVE32F-NEXT: .LBB12_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB12_9
@@ -756,14 +758,31 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB12_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB12_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lbu a2, 0(a2)
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e8, mf2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB12_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB12_16
-; RV64ZVE32F-NEXT: .LBB12_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB12_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lbu a0, 0(a0)
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB12_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB12_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB12_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: add a2, a0, a2
; RV64ZVE32F-NEXT: lbu a2, 0(a2)
@@ -772,7 +791,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB12_6
-; RV64ZVE32F-NEXT: .LBB12_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB12_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -783,7 +802,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB12_7
-; RV64ZVE32F-NEXT: .LBB12_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB12_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 5, e8, mf2, tu, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -793,26 +812,6 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB12_8
; RV64ZVE32F-NEXT: j .LBB12_9
-; RV64ZVE32F-NEXT: .LBB12_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lbu a2, 0(a2)
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e8, mf2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB12_11
-; RV64ZVE32F-NEXT: .LBB12_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lbu a0, 0(a0)
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i8, ptr %base, <8 x i8> %idxs
%v = call <8 x i8> @llvm.masked.gather.v8i8.v8p0(<8 x ptr> %ptrs, i32 1, <8 x i1> %m, <8 x i8> %passthru)
ret <8 x i8> %v
@@ -1253,11 +1252,13 @@ define <4 x i16> @mgather_truemask_v4i16(<4 x ptr> %ptrs, <4 x i16> %passthru) {
define <4 x i16> @mgather_falsemask_v4i16(<4 x ptr> %ptrs, <4 x i16> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i16:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -1435,13 +1436,13 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB23_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB23_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB23_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB23_15
; RV64ZVE32F-NEXT: .LBB23_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB23_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB23_16
; RV64ZVE32F-NEXT: .LBB23_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB23_9
@@ -1460,14 +1461,35 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB23_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB23_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB23_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB23_16
-; RV64ZVE32F-NEXT: .LBB23_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB23_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB23_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB23_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB23_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -1478,7 +1500,7 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB23_6
-; RV64ZVE32F-NEXT: .LBB23_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB23_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -1491,7 +1513,7 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB23_7
-; RV64ZVE32F-NEXT: .LBB23_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB23_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -1504,30 +1526,6 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB23_8
; RV64ZVE32F-NEXT: j .LBB23_9
-; RV64ZVE32F-NEXT: .LBB23_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB23_11
-; RV64ZVE32F-NEXT: .LBB23_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <8 x i8> %idxs
%v = call <8 x i16> @llvm.masked.gather.v8i16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x i16> %passthru)
ret <8 x i16> %v
@@ -1587,13 +1585,13 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB24_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB24_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB24_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB24_15
; RV64ZVE32F-NEXT: .LBB24_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB24_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB24_16
; RV64ZVE32F-NEXT: .LBB24_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB24_9
@@ -1612,14 +1610,35 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB24_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB24_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB24_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB24_16
-; RV64ZVE32F-NEXT: .LBB24_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB24_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB24_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB24_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB24_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -1630,7 +1649,7 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB24_6
-; RV64ZVE32F-NEXT: .LBB24_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB24_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -1643,7 +1662,7 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB24_7
-; RV64ZVE32F-NEXT: .LBB24_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB24_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -1656,30 +1675,6 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB24_8
; RV64ZVE32F-NEXT: j .LBB24_9
-; RV64ZVE32F-NEXT: .LBB24_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB24_11
-; RV64ZVE32F-NEXT: .LBB24_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds i16, ptr %base, <8 x i16> %eidxs
%v = call <8 x i16> @llvm.masked.gather.v8i16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x i16> %passthru)
@@ -1740,13 +1735,13 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB25_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB25_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB25_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB25_15
; RV64ZVE32F-NEXT: .LBB25_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB25_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB25_16
; RV64ZVE32F-NEXT: .LBB25_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB25_9
@@ -1766,14 +1761,37 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB25_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB25_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: andi a2, a2, 255
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB25_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB25_16
-; RV64ZVE32F-NEXT: .LBB25_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB25_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: andi a1, a1, 255
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB25_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB25_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB25_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: andi a2, a2, 255
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -1785,7 +1803,7 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB25_6
-; RV64ZVE32F-NEXT: .LBB25_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB25_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -1799,7 +1817,7 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB25_7
-; RV64ZVE32F-NEXT: .LBB25_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB25_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: andi a2, a2, 255
@@ -1813,32 +1831,6 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB25_8
; RV64ZVE32F-NEXT: j .LBB25_9
-; RV64ZVE32F-NEXT: .LBB25_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: andi a2, a2, 255
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB25_11
-; RV64ZVE32F-NEXT: .LBB25_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: andi a1, a1, 255
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds i16, ptr %base, <8 x i16> %eidxs
%v = call <8 x i16> @llvm.masked.gather.v8i16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x i16> %passthru)
@@ -1896,13 +1888,13 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB26_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB26_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB26_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB26_15
; RV64ZVE32F-NEXT: .LBB26_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB26_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB26_16
; RV64ZVE32F-NEXT: .LBB26_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB26_9
@@ -1920,14 +1912,33 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB26_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB26_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB26_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB26_16
-; RV64ZVE32F-NEXT: .LBB26_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB26_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB26_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB26_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB26_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -1937,7 +1948,7 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB26_6
-; RV64ZVE32F-NEXT: .LBB26_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB26_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -1949,7 +1960,7 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB26_7
-; RV64ZVE32F-NEXT: .LBB26_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB26_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 5, e16, m1, tu, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -1960,28 +1971,6 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB26_8
; RV64ZVE32F-NEXT: j .LBB26_9
-; RV64ZVE32F-NEXT: .LBB26_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB26_11
-; RV64ZVE32F-NEXT: .LBB26_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <8 x i16> %idxs
%v = call <8 x i16> @llvm.masked.gather.v8i16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x i16> %passthru)
ret <8 x i16> %v
@@ -2311,11 +2300,13 @@ define <4 x i32> @mgather_truemask_v4i32(<4 x ptr> %ptrs, <4 x i32> %passthru) {
define <4 x i32> @mgather_falsemask_v4i32(<4 x ptr> %ptrs, <4 x i32> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i32:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i32:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -2492,13 +2483,13 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB35_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB35_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB35_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB35_15
; RV64ZVE32F-NEXT: .LBB35_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB35_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB35_16
; RV64ZVE32F-NEXT: .LBB35_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB35_9
@@ -2517,14 +2508,35 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB35_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB35_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB35_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB35_16
-; RV64ZVE32F-NEXT: .LBB35_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB35_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB35_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB35_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB35_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -2535,7 +2547,7 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB35_6
-; RV64ZVE32F-NEXT: .LBB35_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB35_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -2548,7 +2560,7 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB35_7
-; RV64ZVE32F-NEXT: .LBB35_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB35_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -2561,30 +2573,6 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB35_8
; RV64ZVE32F-NEXT: j .LBB35_9
-; RV64ZVE32F-NEXT: .LBB35_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB35_11
-; RV64ZVE32F-NEXT: .LBB35_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i8> %idxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
ret <8 x i32> %v
@@ -2643,13 +2631,13 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB36_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB36_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB36_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB36_15
; RV64ZVE32F-NEXT: .LBB36_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB36_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB36_16
; RV64ZVE32F-NEXT: .LBB36_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB36_9
@@ -2668,14 +2656,35 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB36_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB36_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB36_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB36_16
-; RV64ZVE32F-NEXT: .LBB36_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB36_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB36_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB36_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB36_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -2686,7 +2695,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB36_6
-; RV64ZVE32F-NEXT: .LBB36_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB36_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -2699,7 +2708,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB36_7
-; RV64ZVE32F-NEXT: .LBB36_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB36_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -2712,30 +2721,6 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB36_8
; RV64ZVE32F-NEXT: j .LBB36_9
-; RV64ZVE32F-NEXT: .LBB36_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB36_11
-; RV64ZVE32F-NEXT: .LBB36_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i8> %idxs to <8 x i32>
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i32> %eidxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
@@ -2798,13 +2783,13 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB37_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB37_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB37_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB37_15
; RV64ZVE32F-NEXT: .LBB37_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB37_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB37_16
; RV64ZVE32F-NEXT: .LBB37_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB37_9
@@ -2824,14 +2809,37 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB37_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB37_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: andi a2, a2, 255
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB37_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB37_16
-; RV64ZVE32F-NEXT: .LBB37_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB37_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: andi a1, a1, 255
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB37_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB37_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB37_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: andi a2, a2, 255
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -2843,7 +2851,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB37_6
-; RV64ZVE32F-NEXT: .LBB37_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB37_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -2857,7 +2865,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB37_7
-; RV64ZVE32F-NEXT: .LBB37_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB37_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: andi a2, a2, 255
@@ -2871,32 +2879,6 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB37_8
; RV64ZVE32F-NEXT: j .LBB37_9
-; RV64ZVE32F-NEXT: .LBB37_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: andi a2, a2, 255
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB37_11
-; RV64ZVE32F-NEXT: .LBB37_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: andi a1, a1, 255
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i8> %idxs to <8 x i32>
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i32> %eidxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
@@ -2957,13 +2939,13 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB38_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB38_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB38_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB38_15
; RV64ZVE32F-NEXT: .LBB38_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB38_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB38_16
; RV64ZVE32F-NEXT: .LBB38_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB38_9
@@ -2982,14 +2964,35 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB38_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB38_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB38_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB38_16
-; RV64ZVE32F-NEXT: .LBB38_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB38_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB38_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB38_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB38_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -3000,7 +3003,7 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB38_6
-; RV64ZVE32F-NEXT: .LBB38_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB38_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -3013,7 +3016,7 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB38_7
-; RV64ZVE32F-NEXT: .LBB38_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB38_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -3026,30 +3029,6 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB38_8
; RV64ZVE32F-NEXT: j .LBB38_9
-; RV64ZVE32F-NEXT: .LBB38_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB38_11
-; RV64ZVE32F-NEXT: .LBB38_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i16> %idxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
ret <8 x i32> %v
@@ -3109,13 +3088,13 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB39_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB39_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB39_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB39_15
; RV64ZVE32F-NEXT: .LBB39_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB39_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB39_16
; RV64ZVE32F-NEXT: .LBB39_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB39_9
@@ -3134,14 +3113,35 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB39_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB39_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB39_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB39_16
-; RV64ZVE32F-NEXT: .LBB39_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB39_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB39_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB39_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB39_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -3152,7 +3152,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB39_6
-; RV64ZVE32F-NEXT: .LBB39_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB39_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -3165,7 +3165,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB39_7
-; RV64ZVE32F-NEXT: .LBB39_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB39_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -3178,30 +3178,6 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB39_8
; RV64ZVE32F-NEXT: j .LBB39_9
-; RV64ZVE32F-NEXT: .LBB39_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB39_11
-; RV64ZVE32F-NEXT: .LBB39_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i16> %idxs to <8 x i32>
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i32> %eidxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
@@ -3265,13 +3241,13 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a3, a2, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a3, .LBB40_12
+; RV64ZVE32F-NEXT: bnez a3, .LBB40_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a3, a2, 8
-; RV64ZVE32F-NEXT: bnez a3, .LBB40_13
+; RV64ZVE32F-NEXT: bnez a3, .LBB40_15
; RV64ZVE32F-NEXT: .LBB40_6: # %else8
; RV64ZVE32F-NEXT: andi a3, a2, 16
-; RV64ZVE32F-NEXT: bnez a3, .LBB40_14
+; RV64ZVE32F-NEXT: bnez a3, .LBB40_16
; RV64ZVE32F-NEXT: .LBB40_7: # %else11
; RV64ZVE32F-NEXT: andi a3, a2, 32
; RV64ZVE32F-NEXT: beqz a3, .LBB40_9
@@ -3291,14 +3267,37 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a3, a2, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a3, .LBB40_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a3, .LBB40_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a3, v8
+; RV64ZVE32F-NEXT: and a3, a3, a1
+; RV64ZVE32F-NEXT: slli a3, a3, 2
+; RV64ZVE32F-NEXT: add a3, a0, a3
+; RV64ZVE32F-NEXT: lw a3, 0(a3)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v12, a3
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB40_11: # %else17
; RV64ZVE32F-NEXT: andi a2, a2, -128
-; RV64ZVE32F-NEXT: bnez a2, .LBB40_16
-; RV64ZVE32F-NEXT: .LBB40_11: # %else20
+; RV64ZVE32F-NEXT: beqz a2, .LBB40_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: and a1, a2, a1
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB40_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB40_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB40_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a3, v8
; RV64ZVE32F-NEXT: and a3, a3, a1
; RV64ZVE32F-NEXT: slli a3, a3, 2
@@ -3310,7 +3309,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a3, a2, 8
; RV64ZVE32F-NEXT: beqz a3, .LBB40_6
-; RV64ZVE32F-NEXT: .LBB40_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB40_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a3, v8
@@ -3324,7 +3323,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a3, a2, 16
; RV64ZVE32F-NEXT: beqz a3, .LBB40_7
-; RV64ZVE32F-NEXT: .LBB40_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB40_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a3, v9
; RV64ZVE32F-NEXT: and a3, a3, a1
@@ -3338,32 +3337,6 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: andi a3, a2, 32
; RV64ZVE32F-NEXT: bnez a3, .LBB40_8
; RV64ZVE32F-NEXT: j .LBB40_9
-; RV64ZVE32F-NEXT: .LBB40_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a3, v8
-; RV64ZVE32F-NEXT: and a3, a3, a1
-; RV64ZVE32F-NEXT: slli a3, a3, 2
-; RV64ZVE32F-NEXT: add a3, a0, a3
-; RV64ZVE32F-NEXT: lw a3, 0(a3)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v12, a3
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a2, a2, -128
-; RV64ZVE32F-NEXT: beqz a2, .LBB40_11
-; RV64ZVE32F-NEXT: .LBB40_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: and a1, a2, a1
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i16> %idxs to <8 x i32>
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i32> %eidxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
@@ -3420,13 +3393,13 @@ define <8 x i32> @mgather_baseidx_v8i32(ptr %base, <8 x i32> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB41_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB41_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB41_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB41_15
; RV64ZVE32F-NEXT: .LBB41_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB41_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB41_16
; RV64ZVE32F-NEXT: .LBB41_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB41_9
@@ -3444,35 +3417,54 @@ define <8 x i32> @mgather_baseidx_v8i32(ptr %base, <8 x i32> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v12, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB41_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB41_16
-; RV64ZVE32F-NEXT: .LBB41_11: # %else20
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB41_12: # %cond.load4
+; RV64ZVE32F-NEXT: beqz a2, .LBB41_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vmv.s.x v9, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 3, e32, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v9, 2
-; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: beqz a2, .LBB41_6
-; RV64ZVE32F-NEXT: .LBB41_13: # %cond.load7
-; RV64ZVE32F-NEXT: vsetivli zero, 4, e32, m1, tu, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: vmv.s.x v12, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB41_11: # %else17
+; RV64ZVE32F-NEXT: andi a1, a1, -128
+; RV64ZVE32F-NEXT: beqz a1, .LBB41_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lw a0, 0(a0)
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB41_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vmv2r.v v8, v10
+; RV64ZVE32F-NEXT: ret
+; RV64ZVE32F-NEXT: .LBB41_14: # %cond.load4
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lw a2, 0(a2)
+; RV64ZVE32F-NEXT: vmv.s.x v9, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 3, e32, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v9, 2
+; RV64ZVE32F-NEXT: andi a2, a1, 8
+; RV64ZVE32F-NEXT: beqz a2, .LBB41_6
+; RV64ZVE32F-NEXT: .LBB41_15: # %cond.load7
+; RV64ZVE32F-NEXT: vsetivli zero, 4, e32, m1, tu, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
; RV64ZVE32F-NEXT: lw a2, 0(a2)
; RV64ZVE32F-NEXT: vmv.s.x v8, a2
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB41_7
-; RV64ZVE32F-NEXT: .LBB41_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB41_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 5, e32, m2, tu, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v12
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -3483,28 +3475,6 @@ define <8 x i32> @mgather_baseidx_v8i32(ptr %base, <8 x i32> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB41_8
; RV64ZVE32F-NEXT: j .LBB41_9
-; RV64ZVE32F-NEXT: .LBB41_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lw a2, 0(a2)
-; RV64ZVE32F-NEXT: vmv.s.x v12, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB41_11
-; RV64ZVE32F-NEXT: .LBB41_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lw a0, 0(a0)
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <8 x i32> %idxs
%v = call <8 x i32> @llvm.masked.gather.v8i32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x i32> %passthru)
ret <8 x i32> %v
@@ -3822,11 +3792,13 @@ define <4 x i64> @mgather_truemask_v4i64(<4 x ptr> %ptrs, <4 x i64> %passthru) {
define <4 x i64> @mgather_falsemask_v4i64(<4 x ptr> %ptrs, <4 x i64> %passthru) {
; RV32V-LABEL: mgather_falsemask_v4i64:
; RV32V: # %bb.0:
+; RV32V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32V-NEXT: vmv2r.v v8, v10
; RV32V-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i64:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv2r.v v8, v10
; RV64V-NEXT: ret
;
@@ -7113,11 +7085,13 @@ define <4 x bfloat> @mgather_truemask_v4bf16(<4 x ptr> %ptrs, <4 x bfloat> %pass
define <4 x bfloat> @mgather_falsemask_v4bf16(<4 x ptr> %ptrs, <4 x bfloat> %passthru) {
; RV32-LABEL: mgather_falsemask_v4bf16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4bf16:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -7295,13 +7269,13 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB64_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB64_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB64_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB64_15
; RV64ZVE32F-NEXT: .LBB64_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB64_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB64_16
; RV64ZVE32F-NEXT: .LBB64_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB64_9
@@ -7320,14 +7294,35 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB64_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB64_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB64_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB64_16
-; RV64ZVE32F-NEXT: .LBB64_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB64_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB64_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB64_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB64_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -7338,7 +7333,7 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB64_6
-; RV64ZVE32F-NEXT: .LBB64_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB64_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -7351,7 +7346,7 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB64_7
-; RV64ZVE32F-NEXT: .LBB64_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB64_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -7364,30 +7359,6 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB64_8
; RV64ZVE32F-NEXT: j .LBB64_9
-; RV64ZVE32F-NEXT: .LBB64_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB64_11
-; RV64ZVE32F-NEXT: .LBB64_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds bfloat, ptr %base, <8 x i8> %idxs
%v = call <8 x bfloat> @llvm.masked.gather.v8bf16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x bfloat> %passthru)
ret <8 x bfloat> %v
@@ -7447,13 +7418,13 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB65_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB65_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB65_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB65_15
; RV64ZVE32F-NEXT: .LBB65_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB65_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB65_16
; RV64ZVE32F-NEXT: .LBB65_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB65_9
@@ -7472,14 +7443,35 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB65_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB65_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB65_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB65_16
-; RV64ZVE32F-NEXT: .LBB65_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB65_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB65_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB65_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB65_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -7490,7 +7482,7 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB65_6
-; RV64ZVE32F-NEXT: .LBB65_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB65_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -7503,7 +7495,7 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB65_7
-; RV64ZVE32F-NEXT: .LBB65_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB65_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -7516,30 +7508,6 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB65_8
; RV64ZVE32F-NEXT: j .LBB65_9
-; RV64ZVE32F-NEXT: .LBB65_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB65_11
-; RV64ZVE32F-NEXT: .LBB65_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds bfloat, ptr %base, <8 x i16> %eidxs
%v = call <8 x bfloat> @llvm.masked.gather.v8bf16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x bfloat> %passthru)
@@ -7600,13 +7568,13 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB66_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB66_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB66_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB66_15
; RV64ZVE32F-NEXT: .LBB66_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB66_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB66_16
; RV64ZVE32F-NEXT: .LBB66_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB66_9
@@ -7626,14 +7594,37 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB66_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB66_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: andi a2, a2, 255
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB66_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB66_16
-; RV64ZVE32F-NEXT: .LBB66_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB66_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: andi a1, a1, 255
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB66_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB66_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB66_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: andi a2, a2, 255
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -7645,7 +7636,7 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB66_6
-; RV64ZVE32F-NEXT: .LBB66_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB66_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -7659,7 +7650,7 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB66_7
-; RV64ZVE32F-NEXT: .LBB66_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB66_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: andi a2, a2, 255
@@ -7673,32 +7664,6 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB66_8
; RV64ZVE32F-NEXT: j .LBB66_9
-; RV64ZVE32F-NEXT: .LBB66_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: andi a2, a2, 255
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB66_11
-; RV64ZVE32F-NEXT: .LBB66_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: andi a1, a1, 255
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds bfloat, ptr %base, <8 x i16> %eidxs
%v = call <8 x bfloat> @llvm.masked.gather.v8bf16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x bfloat> %passthru)
@@ -7756,13 +7721,13 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB67_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB67_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB67_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB67_15
; RV64ZVE32F-NEXT: .LBB67_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB67_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB67_16
; RV64ZVE32F-NEXT: .LBB67_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB67_9
@@ -7780,14 +7745,33 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB67_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB67_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 1
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-NEXT: .LBB67_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB67_16
-; RV64ZVE32F-NEXT: .LBB67_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB67_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 1
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-NEXT: .LBB67_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB67_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB67_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 1
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -7797,7 +7781,7 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB67_6
-; RV64ZVE32F-NEXT: .LBB67_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB67_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -7809,7 +7793,7 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB67_7
-; RV64ZVE32F-NEXT: .LBB67_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB67_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 5, e16, m1, tu, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-NEXT: slli a2, a2, 1
@@ -7820,28 +7804,6 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB67_8
; RV64ZVE32F-NEXT: j .LBB67_9
-; RV64ZVE32F-NEXT: .LBB67_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 1
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB67_11
-; RV64ZVE32F-NEXT: .LBB67_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 1
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds bfloat, ptr %base, <8 x i16> %idxs
%v = call <8 x bfloat> @llvm.masked.gather.v8bf16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x bfloat> %passthru)
ret <8 x bfloat> %v
@@ -8135,11 +8097,13 @@ define <4 x half> @mgather_truemask_v4f16(<4 x ptr> %ptrs, <4 x half> %passthru)
define <4 x half> @mgather_falsemask_v4f16(<4 x ptr> %ptrs, <4 x half> %passthru) {
; RV32-LABEL: mgather_falsemask_v4f16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f16:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -8410,13 +8374,13 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_12
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_14
; RV64ZVE32F-ZVFH-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_13
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_15
; RV64ZVE32F-ZVFH-NEXT: .LBB74_6: # %else8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_14
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_16
; RV64ZVE32F-ZVFH-NEXT: .LBB74_7: # %else11
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB74_9
@@ -8435,14 +8399,35 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_15
-; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB74_11
+; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFH-NEXT: .LBB74_11: # %else17
; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: bnez a1, .LBB74_16
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_11: # %else20
+; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB74_13
+; RV64ZVE32F-ZVFH-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFH-NEXT: .LBB74_13: # %else20
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_12: # %cond.load4
+; RV64ZVE32F-ZVFH-NEXT: .LBB74_14: # %cond.load4
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
@@ -8453,7 +8438,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB74_6
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_13: # %cond.load7
+; RV64ZVE32F-ZVFH-NEXT: .LBB74_15: # %cond.load7
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
@@ -8466,7 +8451,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB74_7
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_14: # %cond.load10
+; RV64ZVE32F-ZVFH-NEXT: .LBB74_16: # %cond.load10
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
@@ -8479,30 +8464,6 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB74_8
; RV64ZVE32F-ZVFH-NEXT: j .LBB74_9
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_15: # %cond.load16
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB74_11
-; RV64ZVE32F-ZVFH-NEXT: .LBB74_16: # %cond.load19
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFH-NEXT: ret
;
; RV64ZVE32F-ZVFHMIN-LABEL: mgather_baseidx_v8i8_v8f16:
; RV64ZVE32F-ZVFHMIN: # %bb.0:
@@ -8537,13 +8498,13 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_12
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_14
; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_13
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_15
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_6: # %else8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_14
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_16
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_7: # %else11
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB74_9
@@ -8562,14 +8523,35 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_15
-; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB74_11
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_11: # %else17
; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a1, .LBB74_16
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_11: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB74_13
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_13: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_12: # %cond.load4
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_14: # %cond.load4
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
@@ -8580,7 +8562,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB74_6
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_13: # %cond.load7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_15: # %cond.load7
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
@@ -8593,7 +8575,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB74_7
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_14: # %cond.load10
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_16: # %cond.load10
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
@@ -8606,30 +8588,6 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB74_8
; RV64ZVE32F-ZVFHMIN-NEXT: j .LBB74_9
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_15: # %cond.load16
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB74_11
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_16: # %cond.load19
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFHMIN-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <8 x i8> %idxs
%v = call <8 x half> @llvm.masked.gather.v8f16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x half> %passthru)
ret <8 x half> %v
@@ -8689,13 +8647,13 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_12
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_14
; RV64ZVE32F-ZVFH-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_13
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_15
; RV64ZVE32F-ZVFH-NEXT: .LBB75_6: # %else8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_14
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_16
; RV64ZVE32F-ZVFH-NEXT: .LBB75_7: # %else11
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB75_9
@@ -8714,14 +8672,35 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_15
-; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB75_11
+; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFH-NEXT: .LBB75_11: # %else17
; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: bnez a1, .LBB75_16
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_11: # %else20
+; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB75_13
+; RV64ZVE32F-ZVFH-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFH-NEXT: .LBB75_13: # %else20
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_12: # %cond.load4
+; RV64ZVE32F-ZVFH-NEXT: .LBB75_14: # %cond.load4
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
@@ -8732,7 +8711,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB75_6
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_13: # %cond.load7
+; RV64ZVE32F-ZVFH-NEXT: .LBB75_15: # %cond.load7
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
@@ -8745,7 +8724,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB75_7
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_14: # %cond.load10
+; RV64ZVE32F-ZVFH-NEXT: .LBB75_16: # %cond.load10
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
@@ -8758,30 +8737,6 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB75_8
; RV64ZVE32F-ZVFH-NEXT: j .LBB75_9
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_15: # %cond.load16
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB75_11
-; RV64ZVE32F-ZVFH-NEXT: .LBB75_16: # %cond.load19
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFH-NEXT: ret
;
; RV64ZVE32F-ZVFHMIN-LABEL: mgather_baseidx_sext_v8i8_v8f16:
; RV64ZVE32F-ZVFHMIN: # %bb.0:
@@ -8816,13 +8771,13 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_12
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_14
; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_13
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_15
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_6: # %else8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_14
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_16
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_7: # %else11
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB75_9
@@ -8841,14 +8796,35 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_15
-; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB75_11
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_11: # %else17
; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a1, .LBB75_16
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_11: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB75_13
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_13: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_12: # %cond.load4
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_14: # %cond.load4
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
@@ -8859,7 +8835,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB75_6
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_13: # %cond.load7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_15: # %cond.load7
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
@@ -8872,7 +8848,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB75_7
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_14: # %cond.load10
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_16: # %cond.load10
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
@@ -8885,30 +8861,6 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB75_8
; RV64ZVE32F-ZVFHMIN-NEXT: j .LBB75_9
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_15: # %cond.load16
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB75_11
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_16: # %cond.load19
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFHMIN-NEXT: ret
%eidxs = sext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds half, ptr %base, <8 x i16> %eidxs
%v = call <8 x half> @llvm.masked.gather.v8f16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x half> %passthru)
@@ -8969,13 +8921,13 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_12
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_14
; RV64ZVE32F-ZVFH-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_13
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_15
; RV64ZVE32F-ZVFH-NEXT: .LBB76_6: # %else8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_14
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_16
; RV64ZVE32F-ZVFH-NEXT: .LBB76_7: # %else11
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB76_9
@@ -8995,14 +8947,37 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_15
-; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB76_11
+; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFH-NEXT: andi a2, a2, 255
+; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFH-NEXT: .LBB76_11: # %else17
; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: bnez a1, .LBB76_16
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_11: # %else20
+; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB76_13
+; RV64ZVE32F-ZVFH-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, 255
+; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
+; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFH-NEXT: .LBB76_13: # %else20
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_12: # %cond.load4
+; RV64ZVE32F-ZVFH-NEXT: .LBB76_14: # %cond.load4
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a2, 255
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
@@ -9014,7 +8989,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB76_6
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_13: # %cond.load7
+; RV64ZVE32F-ZVFH-NEXT: .LBB76_15: # %cond.load7
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
@@ -9028,7 +9003,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB76_7
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_14: # %cond.load10
+; RV64ZVE32F-ZVFH-NEXT: .LBB76_16: # %cond.load10
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFH-NEXT: andi a2, a2, 255
@@ -9042,32 +9017,6 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB76_8
; RV64ZVE32F-ZVFH-NEXT: j .LBB76_9
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_15: # %cond.load16
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFH-NEXT: andi a2, a2, 255
-; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB76_11
-; RV64ZVE32F-ZVFH-NEXT: .LBB76_16: # %cond.load19
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, 255
-; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
-; RV64ZVE32F-ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFH-NEXT: ret
;
; RV64ZVE32F-ZVFHMIN-LABEL: mgather_baseidx_zext_v8i8_v8f16:
; RV64ZVE32F-ZVFHMIN: # %bb.0:
@@ -9104,13 +9053,13 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_12
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_14
; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_13
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_15
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_6: # %else8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_14
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_16
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_7: # %else11
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB76_9
@@ -9130,14 +9079,37 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_15
-; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB76_11
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a2, 255
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_11: # %else17
; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a1, .LBB76_16
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_11: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB76_13
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, 255
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_13: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_12: # %cond.load4
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_14: # %cond.load4
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a2, 255
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
@@ -9149,7 +9121,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB76_6
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_13: # %cond.load7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_15: # %cond.load7
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
@@ -9163,7 +9135,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB76_7
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_14: # %cond.load10
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_16: # %cond.load10
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a2, 255
@@ -9177,32 +9149,6 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB76_8
; RV64ZVE32F-ZVFHMIN-NEXT: j .LBB76_9
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_15: # %cond.load16
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a2, 255
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB76_11
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_16: # %cond.load19
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, 255
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFHMIN-NEXT: ret
%eidxs = zext <8 x i8> %idxs to <8 x i16>
%ptrs = getelementptr inbounds half, ptr %base, <8 x i16> %eidxs
%v = call <8 x half> @llvm.masked.gather.v8f16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x half> %passthru)
@@ -9260,13 +9206,13 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_12
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_14
; RV64ZVE32F-ZVFH-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_13
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_15
; RV64ZVE32F-ZVFH-NEXT: .LBB77_6: # %else8
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_14
+; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_16
; RV64ZVE32F-ZVFH-NEXT: .LBB77_7: # %else11
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB77_9
@@ -9284,14 +9230,33 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_15
-; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB77_11
+; RV64ZVE32F-ZVFH-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFH-NEXT: .LBB77_11: # %else17
; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: bnez a1, .LBB77_16
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_11: # %else20
+; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB77_13
+; RV64ZVE32F-ZVFH-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
+; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFH-NEXT: .LBB77_13: # %else20
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_12: # %cond.load4
+; RV64ZVE32F-ZVFH-NEXT: .LBB77_14: # %cond.load4
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
@@ -9301,7 +9266,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB77_6
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_13: # %cond.load7
+; RV64ZVE32F-ZVFH-NEXT: .LBB77_15: # %cond.load7
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
@@ -9313,7 +9278,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFH-NEXT: beqz a2, .LBB77_7
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_14: # %cond.load10
+; RV64ZVE32F-ZVFH-NEXT: .LBB77_16: # %cond.load10
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 5, e16, m1, tu, ma
; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
@@ -9324,28 +9289,6 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFH-NEXT: bnez a2, .LBB77_8
; RV64ZVE32F-ZVFH-NEXT: j .LBB77_9
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_15: # %cond.load16
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFH-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a2)
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v10, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFH-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFH-NEXT: beqz a1, .LBB77_11
-; RV64ZVE32F-ZVFH-NEXT: .LBB77_16: # %cond.load19
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFH-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFH-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFH-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFH-NEXT: flh fa5, 0(a0)
-; RV64ZVE32F-ZVFH-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFH-NEXT: ret
;
; RV64ZVE32F-ZVFHMIN-LABEL: mgather_baseidx_v8f16:
; RV64ZVE32F-ZVFHMIN: # %bb.0:
@@ -9379,13 +9322,13 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 4
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_12
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_14
; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.5: # %else5
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_13
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_15
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_6: # %else8
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_14
+; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_16
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_7: # %else11
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB77_9
@@ -9403,14 +9346,33 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 64
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v10, 2
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_15
-; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB77_11
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_11: # %else17
; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: bnez a1, .LBB77_16
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_11: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB77_13
+; RV64ZVE32F-ZVFHMIN-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
+; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
+; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
+; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_13: # %else20
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_12: # %cond.load4
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_14: # %cond.load4
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
@@ -9420,7 +9382,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v11, 2
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 8
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB77_6
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_13: # %cond.load7
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_15: # %cond.load7
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
@@ -9432,7 +9394,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 3
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 16
; RV64ZVE32F-ZVFHMIN-NEXT: beqz a2, .LBB77_7
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_14: # %cond.load10
+; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_16: # %cond.load10
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 5, e16, m1, tu, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v10
; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
@@ -9443,28 +9405,6 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: andi a2, a1, 32
; RV64ZVE32F-ZVFHMIN-NEXT: bnez a2, .LBB77_8
; RV64ZVE32F-ZVFHMIN-NEXT: j .LBB77_9
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_15: # %cond.load16
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a2, a2, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a2, a0, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a2, 0(a2)
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v10, a2
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 7, e16, m1, tu, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v10, 6
-; RV64ZVE32F-ZVFHMIN-NEXT: andi a1, a1, -128
-; RV64ZVE32F-ZVFHMIN-NEXT: beqz a1, .LBB77_11
-; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_16: # %cond.load19
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-ZVFHMIN-NEXT: slli a1, a1, 1
-; RV64ZVE32F-ZVFHMIN-NEXT: add a0, a0, a1
-; RV64ZVE32F-ZVFHMIN-NEXT: lh a0, 0(a0)
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv.s.x v8, a0
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
-; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
-; RV64ZVE32F-ZVFHMIN-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <8 x i16> %idxs
%v = call <8 x half> @llvm.masked.gather.v8f16.v8p0(<8 x ptr> %ptrs, i32 2, <8 x i1> %m, <8 x half> %passthru)
ret <8 x half> %v
@@ -9666,11 +9606,13 @@ define <4 x float> @mgather_truemask_v4f32(<4 x ptr> %ptrs, <4 x float> %passthr
define <4 x float> @mgather_falsemask_v4f32(<4 x ptr> %ptrs, <4 x float> %passthru) {
; RV32-LABEL: mgather_falsemask_v4f32:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f32:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -9847,13 +9789,13 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB84_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB84_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB84_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB84_15
; RV64ZVE32F-NEXT: .LBB84_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB84_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB84_16
; RV64ZVE32F-NEXT: .LBB84_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB84_9
@@ -9872,14 +9814,35 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB84_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB84_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB84_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB84_16
-; RV64ZVE32F-NEXT: .LBB84_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB84_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB84_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB84_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB84_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -9890,7 +9853,7 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB84_6
-; RV64ZVE32F-NEXT: .LBB84_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB84_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -9903,7 +9866,7 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB84_7
-; RV64ZVE32F-NEXT: .LBB84_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB84_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -9916,30 +9879,6 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB84_8
; RV64ZVE32F-NEXT: j .LBB84_9
-; RV64ZVE32F-NEXT: .LBB84_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB84_11
-; RV64ZVE32F-NEXT: .LBB84_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <8 x i8> %idxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
ret <8 x float> %v
@@ -9998,13 +9937,13 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB85_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB85_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB85_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB85_15
; RV64ZVE32F-NEXT: .LBB85_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB85_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB85_16
; RV64ZVE32F-NEXT: .LBB85_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB85_9
@@ -10023,14 +9962,35 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB85_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB85_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB85_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB85_16
-; RV64ZVE32F-NEXT: .LBB85_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB85_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB85_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB85_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB85_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -10041,7 +10001,7 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB85_6
-; RV64ZVE32F-NEXT: .LBB85_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB85_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -10054,7 +10014,7 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB85_7
-; RV64ZVE32F-NEXT: .LBB85_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB85_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -10067,30 +10027,6 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB85_8
; RV64ZVE32F-NEXT: j .LBB85_9
-; RV64ZVE32F-NEXT: .LBB85_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB85_11
-; RV64ZVE32F-NEXT: .LBB85_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i8> %idxs to <8 x i32>
%ptrs = getelementptr inbounds float, ptr %base, <8 x i32> %eidxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
@@ -10153,13 +10089,13 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB86_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB86_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB86_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB86_15
; RV64ZVE32F-NEXT: .LBB86_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB86_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB86_16
; RV64ZVE32F-NEXT: .LBB86_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB86_9
@@ -10179,14 +10115,37 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB86_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB86_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: andi a2, a2, 255
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB86_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB86_16
-; RV64ZVE32F-NEXT: .LBB86_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB86_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: andi a1, a1, 255
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB86_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB86_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB86_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: andi a2, a2, 255
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -10198,7 +10157,7 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB86_6
-; RV64ZVE32F-NEXT: .LBB86_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB86_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -10212,7 +10171,7 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB86_7
-; RV64ZVE32F-NEXT: .LBB86_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB86_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: andi a2, a2, 255
@@ -10226,32 +10185,6 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB86_8
; RV64ZVE32F-NEXT: j .LBB86_9
-; RV64ZVE32F-NEXT: .LBB86_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: andi a2, a2, 255
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB86_11
-; RV64ZVE32F-NEXT: .LBB86_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, mf4, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: andi a1, a1, 255
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i8> %idxs to <8 x i32>
%ptrs = getelementptr inbounds float, ptr %base, <8 x i32> %eidxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
@@ -10312,13 +10245,13 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB87_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB87_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB87_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB87_15
; RV64ZVE32F-NEXT: .LBB87_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB87_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB87_16
; RV64ZVE32F-NEXT: .LBB87_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB87_9
@@ -10337,14 +10270,35 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB87_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB87_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB87_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB87_16
-; RV64ZVE32F-NEXT: .LBB87_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB87_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB87_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB87_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB87_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -10355,7 +10309,7 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB87_6
-; RV64ZVE32F-NEXT: .LBB87_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB87_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -10368,7 +10322,7 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB87_7
-; RV64ZVE32F-NEXT: .LBB87_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB87_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -10381,30 +10335,6 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB87_8
; RV64ZVE32F-NEXT: j .LBB87_9
-; RV64ZVE32F-NEXT: .LBB87_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB87_11
-; RV64ZVE32F-NEXT: .LBB87_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <8 x i16> %idxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
ret <8 x float> %v
@@ -10464,13 +10394,13 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB88_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB88_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB88_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB88_15
; RV64ZVE32F-NEXT: .LBB88_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB88_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB88_16
; RV64ZVE32F-NEXT: .LBB88_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB88_9
@@ -10489,14 +10419,35 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB88_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB88_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB88_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB88_16
-; RV64ZVE32F-NEXT: .LBB88_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB88_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB88_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB88_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB88_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -10507,7 +10458,7 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB88_6
-; RV64ZVE32F-NEXT: .LBB88_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB88_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -10520,7 +10471,7 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB88_7
-; RV64ZVE32F-NEXT: .LBB88_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB88_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v9
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -10533,30 +10484,6 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB88_8
; RV64ZVE32F-NEXT: j .LBB88_9
-; RV64ZVE32F-NEXT: .LBB88_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB88_11
-; RV64ZVE32F-NEXT: .LBB88_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = sext <8 x i16> %idxs to <8 x i32>
%ptrs = getelementptr inbounds float, ptr %base, <8 x i32> %eidxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
@@ -10620,13 +10547,13 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a3, a2, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a3, .LBB89_12
+; RV64ZVE32F-NEXT: bnez a3, .LBB89_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a3, a2, 8
-; RV64ZVE32F-NEXT: bnez a3, .LBB89_13
+; RV64ZVE32F-NEXT: bnez a3, .LBB89_15
; RV64ZVE32F-NEXT: .LBB89_6: # %else8
; RV64ZVE32F-NEXT: andi a3, a2, 16
-; RV64ZVE32F-NEXT: bnez a3, .LBB89_14
+; RV64ZVE32F-NEXT: bnez a3, .LBB89_16
; RV64ZVE32F-NEXT: .LBB89_7: # %else11
; RV64ZVE32F-NEXT: andi a3, a2, 32
; RV64ZVE32F-NEXT: beqz a3, .LBB89_9
@@ -10646,14 +10573,37 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a3, a2, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v9, 2
-; RV64ZVE32F-NEXT: bnez a3, .LBB89_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a3, .LBB89_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a3, v8
+; RV64ZVE32F-NEXT: and a3, a3, a1
+; RV64ZVE32F-NEXT: slli a3, a3, 2
+; RV64ZVE32F-NEXT: add a3, a0, a3
+; RV64ZVE32F-NEXT: flw fa5, 0(a3)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB89_11: # %else17
; RV64ZVE32F-NEXT: andi a2, a2, -128
-; RV64ZVE32F-NEXT: bnez a2, .LBB89_16
-; RV64ZVE32F-NEXT: .LBB89_11: # %else20
+; RV64ZVE32F-NEXT: beqz a2, .LBB89_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: and a1, a2, a1
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB89_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB89_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB89_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a3, v8
; RV64ZVE32F-NEXT: and a3, a3, a1
; RV64ZVE32F-NEXT: slli a3, a3, 2
@@ -10665,7 +10615,7 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 2
; RV64ZVE32F-NEXT: andi a3, a2, 8
; RV64ZVE32F-NEXT: beqz a3, .LBB89_6
-; RV64ZVE32F-NEXT: .LBB89_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB89_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a3, v8
@@ -10679,7 +10629,7 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a3, a2, 16
; RV64ZVE32F-NEXT: beqz a3, .LBB89_7
-; RV64ZVE32F-NEXT: .LBB89_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB89_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vmv.x.s a3, v9
; RV64ZVE32F-NEXT: and a3, a3, a1
@@ -10693,32 +10643,6 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: andi a3, a2, 32
; RV64ZVE32F-NEXT: bnez a3, .LBB89_8
; RV64ZVE32F-NEXT: j .LBB89_9
-; RV64ZVE32F-NEXT: .LBB89_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a3, v8
-; RV64ZVE32F-NEXT: and a3, a3, a1
-; RV64ZVE32F-NEXT: slli a3, a3, 2
-; RV64ZVE32F-NEXT: add a3, a0, a3
-; RV64ZVE32F-NEXT: flw fa5, 0(a3)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a2, a2, -128
-; RV64ZVE32F-NEXT: beqz a2, .LBB89_11
-; RV64ZVE32F-NEXT: .LBB89_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e16, mf2, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: and a1, a2, a1
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vsetvli zero, zero, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%eidxs = zext <8 x i16> %idxs to <8 x i32>
%ptrs = getelementptr inbounds float, ptr %base, <8 x i32> %eidxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
@@ -10775,13 +10699,13 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: andi a2, a1, 4
; RV64ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB90_12
+; RV64ZVE32F-NEXT: bnez a2, .LBB90_14
; RV64ZVE32F-NEXT: # %bb.5: # %else5
; RV64ZVE32F-NEXT: andi a2, a1, 8
-; RV64ZVE32F-NEXT: bnez a2, .LBB90_13
+; RV64ZVE32F-NEXT: bnez a2, .LBB90_15
; RV64ZVE32F-NEXT: .LBB90_6: # %else8
; RV64ZVE32F-NEXT: andi a2, a1, 16
-; RV64ZVE32F-NEXT: bnez a2, .LBB90_14
+; RV64ZVE32F-NEXT: bnez a2, .LBB90_16
; RV64ZVE32F-NEXT: .LBB90_7: # %else11
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: beqz a2, .LBB90_9
@@ -10799,14 +10723,33 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: andi a2, a1, 64
; RV64ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v12, 2
-; RV64ZVE32F-NEXT: bnez a2, .LBB90_15
-; RV64ZVE32F-NEXT: # %bb.10: # %else17
+; RV64ZVE32F-NEXT: beqz a2, .LBB90_11
+; RV64ZVE32F-NEXT: # %bb.10: # %cond.load16
+; RV64ZVE32F-NEXT: vmv.x.s a2, v8
+; RV64ZVE32F-NEXT: slli a2, a2, 2
+; RV64ZVE32F-NEXT: add a2, a0, a2
+; RV64ZVE32F-NEXT: flw fa5, 0(a2)
+; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
+; RV64ZVE32F-NEXT: .LBB90_11: # %else17
; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: bnez a1, .LBB90_16
-; RV64ZVE32F-NEXT: .LBB90_11: # %else20
+; RV64ZVE32F-NEXT: beqz a1, .LBB90_13
+; RV64ZVE32F-NEXT: # %bb.12: # %cond.load19
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e32, m1, ta, ma
+; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
+; RV64ZVE32F-NEXT: vmv.x.s a1, v8
+; RV64ZVE32F-NEXT: slli a1, a1, 2
+; RV64ZVE32F-NEXT: add a0, a0, a1
+; RV64ZVE32F-NEXT: flw fa5, 0(a0)
+; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
+; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
+; RV64ZVE32F-NEXT: .LBB90_13: # %else20
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
-; RV64ZVE32F-NEXT: .LBB90_12: # %cond.load4
+; RV64ZVE32F-NEXT: .LBB90_14: # %cond.load4
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: slli a2, a2, 2
; RV64ZVE32F-NEXT: add a2, a0, a2
@@ -10816,7 +10759,7 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: vslideup.vi v10, v9, 2
; RV64ZVE32F-NEXT: andi a2, a1, 8
; RV64ZVE32F-NEXT: beqz a2, .LBB90_6
-; RV64ZVE32F-NEXT: .LBB90_13: # %cond.load7
+; RV64ZVE32F-NEXT: .LBB90_15: # %cond.load7
; RV64ZVE32F-NEXT: vsetivli zero, 4, e32, m1, tu, ma
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
@@ -10827,7 +10770,7 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 3
; RV64ZVE32F-NEXT: andi a2, a1, 16
; RV64ZVE32F-NEXT: beqz a2, .LBB90_7
-; RV64ZVE32F-NEXT: .LBB90_14: # %cond.load10
+; RV64ZVE32F-NEXT: .LBB90_16: # %cond.load10
; RV64ZVE32F-NEXT: vsetivli zero, 5, e32, m2, tu, ma
; RV64ZVE32F-NEXT: vmv.x.s a2, v12
; RV64ZVE32F-NEXT: slli a2, a2, 2
@@ -10838,28 +10781,6 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: andi a2, a1, 32
; RV64ZVE32F-NEXT: bnez a2, .LBB90_8
; RV64ZVE32F-NEXT: j .LBB90_9
-; RV64ZVE32F-NEXT: .LBB90_15: # %cond.load16
-; RV64ZVE32F-NEXT: vmv.x.s a2, v8
-; RV64ZVE32F-NEXT: slli a2, a2, 2
-; RV64ZVE32F-NEXT: add a2, a0, a2
-; RV64ZVE32F-NEXT: flw fa5, 0(a2)
-; RV64ZVE32F-NEXT: vfmv.s.f v12, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 7, e32, m2, tu, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v12, 6
-; RV64ZVE32F-NEXT: andi a1, a1, -128
-; RV64ZVE32F-NEXT: beqz a1, .LBB90_11
-; RV64ZVE32F-NEXT: .LBB90_16: # %cond.load19
-; RV64ZVE32F-NEXT: vsetivli zero, 1, e32, m1, ta, ma
-; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: vmv.x.s a1, v8
-; RV64ZVE32F-NEXT: slli a1, a1, 2
-; RV64ZVE32F-NEXT: add a0, a0, a1
-; RV64ZVE32F-NEXT: flw fa5, 0(a0)
-; RV64ZVE32F-NEXT: vfmv.s.f v8, fa5
-; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
-; RV64ZVE32F-NEXT: vmv2r.v v8, v10
-; RV64ZVE32F-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <8 x i32> %idxs
%v = call <8 x float> @llvm.masked.gather.v8f32.v8p0(<8 x ptr> %ptrs, i32 4, <8 x i1> %m, <8 x float> %passthru)
ret <8 x float> %v
@@ -11135,11 +11056,13 @@ define <4 x double> @mgather_truemask_v4f64(<4 x ptr> %ptrs, <4 x double> %passt
define <4 x double> @mgather_falsemask_v4f64(<4 x ptr> %ptrs, <4 x double> %passthru) {
; RV32V-LABEL: mgather_falsemask_v4f64:
; RV32V: # %bb.0:
+; RV32V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32V-NEXT: vmv2r.v v8, v10
; RV32V-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f64:
; RV64V: # %bb.0:
+; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64V-NEXT: vmv2r.v v8, v10
; RV64V-NEXT: ret
;
@@ -13700,6 +13623,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
; RV64ZVE32F-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 15
; RV64ZVE32F-NEXT: .LBB107_24: # %else44
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB107_25: # %cond.load4
@@ -14086,6 +14010,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
; RV64ZVE32F-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 31
; RV64ZVE32F-NEXT: .LBB108_48: # %else92
+; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB108_49: # %cond.load4
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
index e0cf39c75da240..e5f3e22361f635 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
@@ -318,6 +318,7 @@ define <128 x i16> @masked_load_v128i16(ptr %a, <128 x i1> %mask) {
define <256 x i8> @masked_load_v256i8(ptr %a, <256 x i1> %mask) {
; CHECK-LABEL: masked_load_v256i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v8
; CHECK-NEXT: li a1, 128
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
index 46c2033d28b387..e614307f35dcc0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
@@ -135,6 +135,7 @@ declare <16 x half> @llvm.vp.nearbyint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_nearbyint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -263,6 +264,7 @@ declare <8 x float> @llvm.vp.nearbyint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_nearbyint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -307,6 +309,7 @@ declare <16 x float> @llvm.vp.nearbyint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_nearbyint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -393,6 +396,7 @@ declare <4 x double> @llvm.vp.nearbyint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_nearbyint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -437,6 +441,7 @@ declare <8 x double> @llvm.vp.nearbyint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_nearbyint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -481,6 +486,7 @@ declare <15 x double> @llvm.vp.nearbyint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_nearbyint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -525,6 +531,7 @@ declare <16 x double> @llvm.vp.nearbyint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_nearbyint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -569,9 +576,9 @@ declare <32 x double> @llvm.vp.nearbyint.v32f64(<32 x double>, <32 x i1>, i32)
define <32 x double> @vp_nearbyint_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v32f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v6, v0
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v7, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
index ad358d73202402..ddb16fe11719a6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
@@ -23,6 +23,7 @@ declare i1 @llvm.vp.reduce.or.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_or_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -39,6 +40,7 @@ declare i1 @llvm.vp.reduce.xor.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_xor_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -71,6 +73,7 @@ declare i1 @llvm.vp.reduce.or.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_or_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -87,6 +90,7 @@ declare i1 @llvm.vp.reduce.xor.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_xor_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -119,6 +123,7 @@ declare i1 @llvm.vp.reduce.or.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_or_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -135,6 +140,7 @@ declare i1 @llvm.vp.reduce.xor.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_xor_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -167,6 +173,7 @@ declare i1 @llvm.vp.reduce.or.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_or_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -183,6 +190,7 @@ declare i1 @llvm.vp.reduce.xor.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_xor_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -231,6 +239,7 @@ declare i1 @llvm.vp.reduce.and.v256i1(i1, <256 x i1>, <256 x i1>, i32)
define zeroext i1 @vpreduce_and_v256i1(i1 zeroext %s, <256 x i1> %v, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_and_v256i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v9
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: li a3, 128
@@ -265,6 +274,7 @@ declare i1 @llvm.vp.reduce.or.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_or_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -281,6 +291,7 @@ declare i1 @llvm.vp.reduce.xor.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_xor_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -297,6 +308,7 @@ declare i1 @llvm.vp.reduce.add.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_add_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -313,6 +325,7 @@ declare i1 @llvm.vp.reduce.add.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_add_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -329,6 +342,7 @@ declare i1 @llvm.vp.reduce.add.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_add_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -345,6 +359,7 @@ declare i1 @llvm.vp.reduce.add.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_add_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -361,6 +376,7 @@ declare i1 @llvm.vp.reduce.add.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_add_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -489,6 +505,7 @@ declare i1 @llvm.vp.reduce.smin.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_smin_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -505,6 +522,7 @@ declare i1 @llvm.vp.reduce.smin.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_smin_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -521,6 +539,7 @@ declare i1 @llvm.vp.reduce.smin.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_smin_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -537,6 +556,7 @@ declare i1 @llvm.vp.reduce.smin.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_smin_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -553,6 +573,7 @@ declare i1 @llvm.vp.reduce.smin.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_smin_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -569,6 +590,7 @@ declare i1 @llvm.vp.reduce.smin.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_smin_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -585,6 +607,7 @@ declare i1 @llvm.vp.reduce.smin.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_smin_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -601,6 +624,7 @@ declare i1 @llvm.vp.reduce.umax.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_umax_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -617,6 +641,7 @@ declare i1 @llvm.vp.reduce.umax.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_umax_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -633,6 +658,7 @@ declare i1 @llvm.vp.reduce.umax.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_umax_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -649,6 +675,7 @@ declare i1 @llvm.vp.reduce.umax.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_umax_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -665,6 +692,7 @@ declare i1 @llvm.vp.reduce.umax.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_umax_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -681,6 +709,7 @@ declare i1 @llvm.vp.reduce.umax.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_umax_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -697,6 +726,7 @@ declare i1 @llvm.vp.reduce.umax.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_umax_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
index b8617fda3aa7ec..b45d18d01f67a2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
@@ -123,6 +123,7 @@ declare <16 x half> @llvm.vp.rint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_rint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -239,6 +240,7 @@ declare <8 x float> @llvm.vp.rint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_rint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -279,6 +281,7 @@ declare <16 x float> @llvm.vp.rint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_rint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -357,6 +360,7 @@ declare <4 x double> @llvm.vp.rint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_rint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -397,6 +401,7 @@ declare <8 x double> @llvm.vp.rint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_rint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -437,6 +442,7 @@ declare <15 x double> @llvm.vp.rint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_rint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -477,6 +483,7 @@ declare <16 x double> @llvm.vp.rint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_rint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -517,9 +524,9 @@ declare <32 x double> @llvm.vp.rint.v32f64(<32 x double>, <32 x i1>, i32)
define <32 x double> @vp_rint_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v32f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v6, v0
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v7, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
index 820a05e3d6042b..0c23a71b9af3f4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
@@ -194,8 +194,8 @@ define <8 x half> @vp_round_v8f16(<8 x half> %va, <8 x i1> %m, i32 zeroext %evl)
;
; ZVFHMIN-LABEL: vp_round_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -261,6 +261,7 @@ declare <16 x half> @llvm.vp.round.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_round_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -280,8 +281,8 @@ define <16 x half> @vp_round_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %
;
; ZVFHMIN-LABEL: vp_round_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -431,6 +432,7 @@ declare <8 x float> @llvm.vp.round.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_round_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -475,6 +477,7 @@ declare <16 x float> @llvm.vp.round.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_round_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -561,6 +564,7 @@ declare <4 x double> @llvm.vp.round.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_round_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -605,6 +609,7 @@ declare <8 x double> @llvm.vp.round.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_round_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -649,6 +654,7 @@ declare <15 x double> @llvm.vp.round.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_round_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -693,6 +699,7 @@ declare <16 x double> @llvm.vp.round.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_round_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -743,6 +750,7 @@ define <32 x double> @vp_round_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -750,7 +758,6 @@ define <32 x double> @vp_round_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v24, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
index 8391c7939180a0..eed410343999dd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
@@ -194,8 +194,8 @@ define <8 x half> @vp_roundeven_v8f16(<8 x half> %va, <8 x i1> %m, i32 zeroext %
;
; ZVFHMIN-LABEL: vp_roundeven_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -261,6 +261,7 @@ declare <16 x half> @llvm.vp.roundeven.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundeven_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -280,8 +281,8 @@ define <16 x half> @vp_roundeven_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroe
;
; ZVFHMIN-LABEL: vp_roundeven_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -431,6 +432,7 @@ declare <8 x float> @llvm.vp.roundeven.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundeven_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -475,6 +477,7 @@ declare <16 x float> @llvm.vp.roundeven.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundeven_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -561,6 +564,7 @@ declare <4 x double> @llvm.vp.roundeven.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundeven_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -605,6 +609,7 @@ declare <8 x double> @llvm.vp.roundeven.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundeven_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -649,6 +654,7 @@ declare <15 x double> @llvm.vp.roundeven.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundeven_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -693,6 +699,7 @@ declare <16 x double> @llvm.vp.roundeven.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundeven_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -743,6 +750,7 @@ define <32 x double> @vp_roundeven_v32f64(<32 x double> %va, <32 x i1> %m, i32 z
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -750,7 +758,6 @@ define <32 x double> @vp_roundeven_v32f64(<32 x double> %va, <32 x i1> %m, i32 z
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v24, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
index 8c38d244602655..fb3015e23a9495 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
@@ -194,8 +194,8 @@ define <8 x half> @vp_roundtozero_v8f16(<8 x half> %va, <8 x i1> %m, i32 zeroext
;
; ZVFHMIN-LABEL: vp_roundtozero_v8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -261,6 +261,7 @@ declare <16 x half> @llvm.vp.roundtozero.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundtozero_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_v16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -280,8 +281,8 @@ define <16 x half> @vp_roundtozero_v16f16(<16 x half> %va, <16 x i1> %m, i32 zer
;
; ZVFHMIN-LABEL: vp_roundtozero_v16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -431,6 +432,7 @@ declare <8 x float> @llvm.vp.roundtozero.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundtozero_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -475,6 +477,7 @@ declare <16 x float> @llvm.vp.roundtozero.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundtozero_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -561,6 +564,7 @@ declare <4 x double> @llvm.vp.roundtozero.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundtozero_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -605,6 +609,7 @@ declare <8 x double> @llvm.vp.roundtozero.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundtozero_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -649,6 +654,7 @@ declare <15 x double> @llvm.vp.roundtozero.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundtozero_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v15f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -693,6 +699,7 @@ declare <16 x double> @llvm.vp.roundtozero.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundtozero_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
@@ -743,6 +750,7 @@ define <32 x double> @vp_roundtozero_v32f64(<32 x double> %va, <32 x i1> %m, i32
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v25, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -750,7 +758,6 @@ define <32 x double> @vp_roundtozero_v32f64(<32 x double> %va, <32 x i1> %m, i32
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v24, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB26_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
index d52c42891fcc3b..11b0edcd77bb6b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
@@ -598,6 +598,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -648,6 +649,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
define <256 x i1> @icmp_eq_vx_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_v256i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
@@ -677,6 +679,7 @@ define <256 x i1> @icmp_eq_vx_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 z
define <256 x i1> @icmp_eq_vx_swap_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_swap_v256i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
index 38026bb591f797..0ae55f035cc9bd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
@@ -8,8 +8,8 @@
define <8 x i32> @concat_2xv4i32(<4 x i32> %a, <4 x i32> %b) {
; VLA-LABEL: concat_2xv4i32:
; VLA: # %bb.0:
-; VLA-NEXT: vmv1r.v v10, v9
; VLA-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; VLA-NEXT: vmv1r.v v10, v9
; VLA-NEXT: vslideup.vi v8, v10, 4
; VLA-NEXT: ret
;
@@ -32,9 +32,9 @@ define <8 x i32> @concat_4xv2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c, <2 x
;
; VLS-LABEL: concat_4xv2i32:
; VLS: # %bb.0:
+; VLS-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; VLS-NEXT: vmv1r.v v13, v10
; VLS-NEXT: vmv1r.v v12, v8
-; VLS-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; VLS-NEXT: vslideup.vi v13, v11, 2
; VLS-NEXT: vslideup.vi v12, v9, 2
; VLS-NEXT: vmv2r.v v8, v12
@@ -62,9 +62,9 @@ define <8 x i32> @concat_8xv1i32(<1 x i32> %a, <1 x i32> %b, <1 x i32> %c, <1 x
;
; VLS-LABEL: concat_8xv1i32:
; VLS: # %bb.0:
+; VLS-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; VLS-NEXT: vmv1r.v v17, v12
; VLS-NEXT: vmv1r.v v16, v8
-; VLS-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; VLS-NEXT: vslideup.vi v14, v15, 1
; VLS-NEXT: vslideup.vi v17, v13, 1
; VLS-NEXT: vsetivli zero, 4, e32, m1, ta, ma
@@ -89,8 +89,8 @@ define <8 x i32> @concat_8xv1i32(<1 x i32> %a, <1 x i32> %b, <1 x i32> %c, <1 x
define <16 x i32> @concat_2xv8i32(<8 x i32> %a, <8 x i32> %b) {
; VLA-LABEL: concat_2xv8i32:
; VLA: # %bb.0:
-; VLA-NEXT: vmv2r.v v12, v10
; VLA-NEXT: vsetivli zero, 16, e32, m4, ta, ma
+; VLA-NEXT: vmv2r.v v12, v10
; VLA-NEXT: vslideup.vi v8, v12, 8
; VLA-NEXT: ret
;
@@ -104,10 +104,10 @@ define <16 x i32> @concat_2xv8i32(<8 x i32> %a, <8 x i32> %b) {
define <16 x i32> @concat_4xv4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x i32> %d) {
; VLA-LABEL: concat_4xv4i32:
; VLA: # %bb.0:
+; VLA-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; VLA-NEXT: vmv1r.v v14, v11
; VLA-NEXT: vmv1r.v v12, v10
; VLA-NEXT: vmv1r.v v10, v9
-; VLA-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; VLA-NEXT: vslideup.vi v12, v14, 4
; VLA-NEXT: vslideup.vi v8, v10, 4
; VLA-NEXT: vsetivli zero, 16, e32, m4, ta, ma
@@ -140,11 +140,11 @@ define <16 x i32> @concat_8xv2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c, <2 x
;
; VLS-LABEL: concat_8xv2i32:
; VLS: # %bb.0:
+; VLS-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; VLS-NEXT: vmv1r.v v19, v14
; VLS-NEXT: vmv1r.v v18, v12
; VLS-NEXT: vmv1r.v v17, v10
; VLS-NEXT: vmv1r.v v16, v8
-; VLS-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; VLS-NEXT: vslideup.vi v19, v15, 2
; VLS-NEXT: vslideup.vi v18, v13, 2
; VLS-NEXT: vslideup.vi v17, v11, 2
@@ -164,6 +164,7 @@ define <16 x i32> @concat_8xv2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c, <2 x
define <32 x i32> @concat_2xv16i32(<16 x i32> %a, <16 x i32> %b) {
; VLA-LABEL: concat_2xv16i32:
; VLA: # %bb.0:
+; VLA-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; VLA-NEXT: vmv4r.v v16, v12
; VLA-NEXT: li a0, 32
; VLA-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -180,11 +181,11 @@ define <32 x i32> @concat_2xv16i32(<16 x i32> %a, <16 x i32> %b) {
define <32 x i32> @concat_4xv8i32(<8 x i32> %a, <8 x i32> %b, <8 x i32> %c, <8 x i32> %d) {
; VLA-LABEL: concat_4xv8i32:
; VLA: # %bb.0:
+; VLA-NEXT: vsetivli zero, 16, e32, m4, ta, ma
; VLA-NEXT: vmv2r.v v20, v14
; VLA-NEXT: vmv2r.v v16, v12
; VLA-NEXT: vmv2r.v v12, v10
; VLA-NEXT: li a0, 32
-; VLA-NEXT: vsetivli zero, 16, e32, m4, ta, ma
; VLA-NEXT: vslideup.vi v16, v20, 8
; VLA-NEXT: vslideup.vi v8, v12, 8
; VLA-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -203,6 +204,7 @@ define <32 x i32> @concat_4xv8i32(<8 x i32> %a, <8 x i32> %b, <8 x i32> %c, <8 x
define <32 x i32> @concat_8xv4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x i32> %d, <4 x i32> %e, <4 x i32> %f, <4 x i32> %g, <4 x i32> %h) {
; VLA-LABEL: concat_8xv4i32:
; VLA: # %bb.0:
+; VLA-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; VLA-NEXT: vmv1r.v v18, v15
; VLA-NEXT: vmv1r.v v20, v14
; VLA-NEXT: vmv1r.v v14, v13
@@ -211,7 +213,6 @@ define <32 x i32> @concat_8xv4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x
; VLA-NEXT: vmv1r.v v12, v10
; VLA-NEXT: vmv1r.v v10, v9
; VLA-NEXT: li a0, 32
-; VLA-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; VLA-NEXT: vslideup.vi v20, v18, 4
; VLA-NEXT: vslideup.vi v16, v14, 4
; VLA-NEXT: vslideup.vi v12, v22, 4
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
index d461fa8378cffc..d560977d25e085 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
@@ -108,6 +108,7 @@ define <4 x i64> @m2_splat_into_identity(<4 x i64> %v1) vscale_range(2,2) {
define <4 x i64> @m2_broadcast_i128(<4 x i64> %v1) vscale_range(2,2) {
; CHECK-LABEL: m2_broadcast_i128:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: ret
%res = shufflevector <4 x i64> %v1, <4 x i64> poison, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
@@ -117,6 +118,7 @@ define <4 x i64> @m2_broadcast_i128(<4 x i64> %v1) vscale_range(2,2) {
define <8 x i64> @m4_broadcast_i128(<8 x i64> %v1) vscale_range(2,2) {
; CHECK-LABEL: m4_broadcast_i128:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vmv1r.v v11, v8
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
index 407535831aedad..f7647ff38c8a08 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
@@ -966,9 +966,9 @@ define <16 x i8> @reverse_v16i8_2(<8 x i8> %a, <8 x i8> %b) {
define <32 x i8> @reverse_v32i8_2(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: reverse_v32i8_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
; CHECK-NEXT: vid.v v12
; CHECK-NEXT: addi a1, a0, -1
; CHECK-NEXT: vrsub.vx v12, v12, a1
@@ -1035,9 +1035,9 @@ define <8 x i16> @reverse_v8i16_2(<4 x i16> %a, <4 x i16> %b) {
define <16 x i16> @reverse_v16i16_2(<8 x i16> %a, <8 x i16> %b) {
; CHECK-LABEL: reverse_v16i16_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: srli a1, a0, 1
; CHECK-NEXT: addi a1, a1, -1
@@ -1060,9 +1060,9 @@ define <16 x i16> @reverse_v16i16_2(<8 x i16> %a, <8 x i16> %b) {
define <32 x i16> @reverse_v32i16_2(<16 x i16> %a, <16 x i16> %b) {
; CHECK-LABEL: reverse_v32i16_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v10
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: lui a1, 16
; CHECK-NEXT: addi a1, a1, -1
@@ -1116,9 +1116,9 @@ define <4 x i32> @reverse_v4i32_2(<2 x i32> %a, < 2 x i32> %b) {
define <8 x i32> @reverse_v8i32_2(<4 x i32> %a, <4 x i32> %b) {
; CHECK-LABEL: reverse_v8i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: srli a1, a0, 2
; CHECK-NEXT: addi a1, a1, -1
@@ -1142,9 +1142,9 @@ define <8 x i32> @reverse_v8i32_2(<4 x i32> %a, <4 x i32> %b) {
define <16 x i32> @reverse_v16i32_2(<8 x i32> %a, <8 x i32> %b) {
; CHECK-LABEL: reverse_v16i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v10
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: srli a1, a0, 2
; CHECK-NEXT: addi a1, a1, -1
@@ -1170,9 +1170,9 @@ define <16 x i32> @reverse_v16i32_2(<8 x i32> %a, <8 x i32> %b) {
define <32 x i32> @reverse_v32i32_2(<16 x i32> %a, <16 x i32> %b) {
; CHECK-LABEL: reverse_v32i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv4r.v v16, v12
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v12
; CHECK-NEXT: srli a1, a0, 2
; CHECK-NEXT: addi a1, a1, -1
@@ -1219,9 +1219,9 @@ define <4 x i64> @reverse_v4i64_2(<2 x i64> %a, < 2 x i64> %b) {
define <8 x i64> @reverse_v8i64_2(<4 x i64> %a, <4 x i64> %b) {
; CHECK-LABEL: reverse_v8i64_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v10
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e64, m1, ta, ma
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: srli a1, a0, 3
; CHECK-NEXT: addi a1, a1, -1
@@ -1289,9 +1289,9 @@ define <8 x half> @reverse_v8f16_2(<4 x half> %a, <4 x half> %b) {
define <16 x half> @reverse_v16f16_2(<8 x half> %a, <8 x half> %b) {
; CHECK-LABEL: reverse_v16f16_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: srli a1, a0, 1
; CHECK-NEXT: addi a1, a1, -1
@@ -1361,9 +1361,9 @@ define <4 x float> @reverse_v4f32_2(<2 x float> %a, <2 x float> %b) {
define <8 x float> @reverse_v8f32_2(<4 x float> %a, <4 x float> %b) {
; CHECK-LABEL: reverse_v8f32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: srli a1, a0, 2
; CHECK-NEXT: addi a1, a1, -1
@@ -1387,9 +1387,9 @@ define <8 x float> @reverse_v8f32_2(<4 x float> %a, <4 x float> %b) {
define <16 x float> @reverse_v16f32_2(<8 x float> %a, <8 x float> %b) {
; CHECK-LABEL: reverse_v16f32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v10
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: srli a1, a0, 2
; CHECK-NEXT: addi a1, a1, -1
@@ -1430,9 +1430,9 @@ define <4 x double> @reverse_v4f64_2(<2 x double> %a, < 2 x double> %b) {
define <8 x double> @reverse_v8f64_2(<4 x double> %a, <4 x double> %b) {
; CHECK-LABEL: reverse_v8f64_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v10
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e64, m1, ta, ma
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: srli a1, a0, 3
; CHECK-NEXT: addi a1, a1, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
index c37c3a9ee0ea0c..54185a64202485 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
@@ -415,8 +415,8 @@ define <4 x i8> @vslide1up_4xi8_neg_incorrect_insert3(<4 x i8> %v, i8 %b) {
define <2 x i8> @vslide1up_4xi8_neg_length_changing(<4 x i8> %v, i8 %b) {
; CHECK-LABEL: vslide1up_4xi8_neg_length_changing:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetivli zero, 2, e8, mf8, ta, ma
; CHECK-NEXT: vslideup.vi v9, v8, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
index 1a08c613ca36a3..e6edbe6afb9a7b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
@@ -62,6 +62,7 @@ define void @gather_masked(ptr noalias nocapture %A, ptr noalias nocapture reado
; CHECK-NEXT: li a4, 5
; CHECK-NEXT: .LBB1_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetvli zero, a3, e8, m1, ta, mu
; CHECK-NEXT: vlse8.v v9, (a1), a4, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
index 1c2c90478a1f77..81671c21e15b46 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
@@ -542,6 +542,7 @@ declare <3 x double> @llvm.experimental.vp.strided.load.v3f64.p0.i32(ptr, i32, <
define <32 x double> @strided_vpload_v32f64(ptr %ptr, i32 signext %stride, <32 x i1> %m, i32 zeroext %evl) nounwind {
; CHECK-LABEL: strided_vpload_v32f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: li a4, 16
; CHECK-NEXT: mv a3, a2
@@ -598,6 +599,7 @@ declare <32 x double> @llvm.experimental.vp.strided.load.v32f64.p0.i32(ptr, i32,
define <33 x double> @strided_load_v33f64(ptr %ptr, i64 %stride, <33 x i1> %mask, i32 zeroext %evl) {
; CHECK-RV32-LABEL: strided_load_v33f64:
; CHECK-RV32: # %bb.0:
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v8, v0
; CHECK-RV32-NEXT: li a5, 32
; CHECK-RV32-NEXT: mv a3, a4
@@ -648,6 +650,7 @@ define <33 x double> @strided_load_v33f64(ptr %ptr, i64 %stride, <33 x i1> %mask
;
; CHECK-RV64-LABEL: strided_load_v33f64:
; CHECK-RV64: # %bb.0:
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v8, v0
; CHECK-RV64-NEXT: li a5, 32
; CHECK-RV64-NEXT: mv a4, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
index 12893ec55cda76..a91dee1cb245f9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
@@ -53,9 +53,9 @@ declare <128 x i7> @llvm.vp.trunc.v128i7.v128i16(<128 x i16>, <128 x i1>, i32)
define <128 x i7> @vtrunc_v128i7_v128i16(<128 x i16> %a, <128 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vtrunc_v128i7_v128i16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a1, 64
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vi v12, v0, 8
; CHECK-NEXT: mv a2, a0
; CHECK-NEXT: bltu a0, a1, .LBB4_2
@@ -231,6 +231,7 @@ define <128 x i32> @vtrunc_v128i32_v128i64(<128 x i64> %a, <128 x i1> %m, i32 ze
; CHECK-NEXT: mul a2, a2, a3
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc8, 0x00, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 72 * vlenb
+; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: li a3, 24
@@ -243,7 +244,6 @@ define <128 x i32> @vtrunc_v128i32_v128i64(<128 x i64> %a, <128 x i1> %m, i32 ze
; CHECK-NEXT: add a2, sp, a2
; CHECK-NEXT: addi a2, a2, 16
; CHECK-NEXT: vs8r.v v8, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vi v6, v0, 8
; CHECK-NEXT: addi a2, a1, 512
; CHECK-NEXT: addi a3, a1, 640
@@ -541,9 +541,9 @@ declare <32 x i32> @llvm.vp.trunc.v32i32.v32i64(<32 x i64>, <32 x i1>, i32)
define <32 x i32> @vtrunc_v32i32_v32i64(<32 x i64> %a, <32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vtrunc_v32i32_v32i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a2, 16
-; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vi v12, v0, 2
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: bltu a0, a2, .LBB17_2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
index db03dc3d5ab1e2..488f364780395b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
@@ -80,14 +80,8 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV32-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv.x.s a0, v0
; RV32-SLOW-NEXT: andi a1, a0, 1
-; RV32-SLOW-NEXT: bnez a1, .LBB4_3
-; RV32-SLOW-NEXT: # %bb.1: # %else
-; RV32-SLOW-NEXT: andi a0, a0, 2
-; RV32-SLOW-NEXT: bnez a0, .LBB4_4
-; RV32-SLOW-NEXT: .LBB4_2: # %else2
-; RV32-SLOW-NEXT: vmv1r.v v8, v9
-; RV32-SLOW-NEXT: ret
-; RV32-SLOW-NEXT: .LBB4_3: # %cond.load
+; RV32-SLOW-NEXT: beqz a1, .LBB4_2
+; RV32-SLOW-NEXT: # %bb.1: # %cond.load
; RV32-SLOW-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV32-SLOW-NEXT: vmv.x.s a1, v8
; RV32-SLOW-NEXT: lbu a2, 1(a1)
@@ -96,9 +90,10 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV32-SLOW-NEXT: or a1, a2, a1
; RV32-SLOW-NEXT: vsetvli zero, zero, e16, m2, tu, ma
; RV32-SLOW-NEXT: vmv.s.x v9, a1
+; RV32-SLOW-NEXT: .LBB4_2: # %else
; RV32-SLOW-NEXT: andi a0, a0, 2
-; RV32-SLOW-NEXT: beqz a0, .LBB4_2
-; RV32-SLOW-NEXT: .LBB4_4: # %cond.load1
+; RV32-SLOW-NEXT: beqz a0, .LBB4_4
+; RV32-SLOW-NEXT: # %bb.3: # %cond.load1
; RV32-SLOW-NEXT: vsetivli zero, 1, e32, mf2, ta, ma
; RV32-SLOW-NEXT: vslidedown.vi v8, v8, 1
; RV32-SLOW-NEXT: vmv.x.s a0, v8
@@ -109,6 +104,8 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV32-SLOW-NEXT: vmv.s.x v8, a0
; RV32-SLOW-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
; RV32-SLOW-NEXT: vslideup.vi v9, v8, 1
+; RV32-SLOW-NEXT: .LBB4_4: # %else2
+; RV32-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv1r.v v8, v9
; RV32-SLOW-NEXT: ret
;
@@ -117,14 +114,8 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV64-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv.x.s a0, v0
; RV64-SLOW-NEXT: andi a1, a0, 1
-; RV64-SLOW-NEXT: bnez a1, .LBB4_3
-; RV64-SLOW-NEXT: # %bb.1: # %else
-; RV64-SLOW-NEXT: andi a0, a0, 2
-; RV64-SLOW-NEXT: bnez a0, .LBB4_4
-; RV64-SLOW-NEXT: .LBB4_2: # %else2
-; RV64-SLOW-NEXT: vmv1r.v v8, v9
-; RV64-SLOW-NEXT: ret
-; RV64-SLOW-NEXT: .LBB4_3: # %cond.load
+; RV64-SLOW-NEXT: beqz a1, .LBB4_2
+; RV64-SLOW-NEXT: # %bb.1: # %cond.load
; RV64-SLOW-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-SLOW-NEXT: vmv.x.s a1, v8
; RV64-SLOW-NEXT: lbu a2, 1(a1)
@@ -133,9 +124,10 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV64-SLOW-NEXT: or a1, a2, a1
; RV64-SLOW-NEXT: vsetvli zero, zero, e16, m2, tu, ma
; RV64-SLOW-NEXT: vmv.s.x v9, a1
+; RV64-SLOW-NEXT: .LBB4_2: # %else
; RV64-SLOW-NEXT: andi a0, a0, 2
-; RV64-SLOW-NEXT: beqz a0, .LBB4_2
-; RV64-SLOW-NEXT: .LBB4_4: # %cond.load1
+; RV64-SLOW-NEXT: beqz a0, .LBB4_4
+; RV64-SLOW-NEXT: # %bb.3: # %cond.load1
; RV64-SLOW-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV64-SLOW-NEXT: vslidedown.vi v8, v8, 1
; RV64-SLOW-NEXT: vmv.x.s a0, v8
@@ -146,6 +138,8 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV64-SLOW-NEXT: vmv.s.x v8, a0
; RV64-SLOW-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
; RV64-SLOW-NEXT: vslideup.vi v9, v8, 1
+; RV64-SLOW-NEXT: .LBB4_4: # %else2
+; RV64-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv1r.v v8, v9
; RV64-SLOW-NEXT: ret
;
@@ -174,23 +168,18 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV32-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv.x.s a0, v0
; RV32-SLOW-NEXT: andi a1, a0, 1
-; RV32-SLOW-NEXT: bnez a1, .LBB5_3
-; RV32-SLOW-NEXT: # %bb.1: # %else
-; RV32-SLOW-NEXT: andi a0, a0, 2
-; RV32-SLOW-NEXT: bnez a0, .LBB5_4
-; RV32-SLOW-NEXT: .LBB5_2: # %else2
-; RV32-SLOW-NEXT: vmv1r.v v8, v9
-; RV32-SLOW-NEXT: ret
-; RV32-SLOW-NEXT: .LBB5_3: # %cond.load
+; RV32-SLOW-NEXT: beqz a1, .LBB5_2
+; RV32-SLOW-NEXT: # %bb.1: # %cond.load
; RV32-SLOW-NEXT: vsetivli zero, 2, e32, m1, tu, ma
; RV32-SLOW-NEXT: vmv.x.s a1, v8
; RV32-SLOW-NEXT: lw a2, 0(a1)
; RV32-SLOW-NEXT: lw a1, 4(a1)
; RV32-SLOW-NEXT: vslide1down.vx v9, v9, a2
; RV32-SLOW-NEXT: vslide1down.vx v9, v9, a1
+; RV32-SLOW-NEXT: .LBB5_2: # %else
; RV32-SLOW-NEXT: andi a0, a0, 2
-; RV32-SLOW-NEXT: beqz a0, .LBB5_2
-; RV32-SLOW-NEXT: .LBB5_4: # %cond.load1
+; RV32-SLOW-NEXT: beqz a0, .LBB5_4
+; RV32-SLOW-NEXT: # %bb.3: # %cond.load1
; RV32-SLOW-NEXT: vsetivli zero, 1, e32, mf2, ta, ma
; RV32-SLOW-NEXT: vslidedown.vi v8, v8, 1
; RV32-SLOW-NEXT: vmv.x.s a0, v8
@@ -201,6 +190,8 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV32-SLOW-NEXT: vslide1down.vx v8, v8, a0
; RV32-SLOW-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV32-SLOW-NEXT: vslideup.vi v9, v8, 1
+; RV32-SLOW-NEXT: .LBB5_4: # %else2
+; RV32-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv1r.v v8, v9
; RV32-SLOW-NEXT: ret
;
@@ -209,14 +200,8 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV64-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv.x.s a0, v0
; RV64-SLOW-NEXT: andi a1, a0, 1
-; RV64-SLOW-NEXT: bnez a1, .LBB5_3
-; RV64-SLOW-NEXT: # %bb.1: # %else
-; RV64-SLOW-NEXT: andi a0, a0, 2
-; RV64-SLOW-NEXT: bnez a0, .LBB5_4
-; RV64-SLOW-NEXT: .LBB5_2: # %else2
-; RV64-SLOW-NEXT: vmv1r.v v8, v9
-; RV64-SLOW-NEXT: ret
-; RV64-SLOW-NEXT: .LBB5_3: # %cond.load
+; RV64-SLOW-NEXT: beqz a1, .LBB5_2
+; RV64-SLOW-NEXT: # %bb.1: # %cond.load
; RV64-SLOW-NEXT: vsetvli zero, zero, e64, m8, tu, ma
; RV64-SLOW-NEXT: vmv.x.s a1, v8
; RV64-SLOW-NEXT: lwu a2, 4(a1)
@@ -224,9 +209,10 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV64-SLOW-NEXT: slli a2, a2, 32
; RV64-SLOW-NEXT: or a1, a2, a1
; RV64-SLOW-NEXT: vmv.s.x v9, a1
+; RV64-SLOW-NEXT: .LBB5_2: # %else
; RV64-SLOW-NEXT: andi a0, a0, 2
-; RV64-SLOW-NEXT: beqz a0, .LBB5_2
-; RV64-SLOW-NEXT: .LBB5_4: # %cond.load1
+; RV64-SLOW-NEXT: beqz a0, .LBB5_4
+; RV64-SLOW-NEXT: # %bb.3: # %cond.load1
; RV64-SLOW-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV64-SLOW-NEXT: vslidedown.vi v8, v8, 1
; RV64-SLOW-NEXT: vmv.x.s a0, v8
@@ -236,6 +222,8 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV64-SLOW-NEXT: or a0, a1, a0
; RV64-SLOW-NEXT: vmv.s.x v8, a0
; RV64-SLOW-NEXT: vslideup.vi v9, v8, 1
+; RV64-SLOW-NEXT: .LBB5_4: # %else2
+; RV64-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv1r.v v8, v9
; RV64-SLOW-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
index 5be1a771eb2799..9e9989590e4eb5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
@@ -363,6 +363,7 @@ declare <256 x i8> @llvm.vp.add.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vadd_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vadd_vi_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
index ac48542ca9ebb3..5ec18aec2b9a60 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
@@ -267,6 +267,7 @@ declare <256 x i8> @llvm.vp.smax.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmax_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmax_vx_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
index 794eef6ed40b21..d47e17077bf902 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
@@ -266,6 +266,7 @@ declare <256 x i8> @llvm.vp.umax.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmaxu_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmaxu_vx_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
index 34011f6bd8acd3..23d292a5fd77d2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
@@ -267,6 +267,7 @@ declare <256 x i8> @llvm.vp.smin.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmin_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmin_vx_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
index 79e72b7d9cac9d..b400015856f84a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
@@ -266,6 +266,7 @@ declare <256 x i8> @llvm.vp.umin.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vminu_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vminu_vx_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpgather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpgather.ll
index 24e75cde2ce915..df9ff0fc39a7e8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpgather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpgather.ll
@@ -2617,8 +2617,8 @@ define <32 x double> @vpgather_baseidx_zext_v32i32_v32f64(ptr %base, <32 x i32>
define <32 x double> @vpgather_baseidx_v32f64(ptr %base, <32 x i64> %idxs, <32 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_v32f64:
; RV32: # %bb.0:
-; RV32-NEXT: vmv1r.v v7, v0
; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, ma
+; RV32-NEXT: vmv1r.v v7, v0
; RV32-NEXT: vnsrl.wi v24, v16, 0
; RV32-NEXT: vnsrl.wi v16, v8, 0
; RV32-NEXT: li a2, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
index 71f497e4c7be48..a971b469df0a2a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
@@ -394,6 +394,7 @@ declare <33 x double> @llvm.vp.load.v33f64.p0(ptr, <33 x i1>, i32)
define <33 x double> @vpload_v33f64(ptr %ptr, <33 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpload_v33f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: li a4, 32
; CHECK-NEXT: mv a3, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
index a11c2b6bca12ec..a53d33e6120d55 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
@@ -1181,9 +1181,9 @@ define <32 x double> @vpmerge_vv_v32f64(<32 x double> %va, <32 x double> %vb, <3
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: addi a1, a0, 128
-; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; CHECK-NEXT: vle64.v v24, (a1)
; CHECK-NEXT: vle64.v v8, (a0)
; CHECK-NEXT: li a1, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
index 888fc79f0122da..ede197c11eb916 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
@@ -372,6 +372,7 @@ declare <256 x i8> @llvm.vp.sadd.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vsadd_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsadd_vi_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
index e1d57ae1e67414..13dc0702c16aa5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
@@ -368,6 +368,7 @@ declare <256 x i8> @llvm.vp.uadd.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vsaddu_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsaddu_vi_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
index 1d8af4c46cc078..07b08e2518ea44 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
@@ -163,6 +163,7 @@ define <256 x i8> @select_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c, i3
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v6, v8
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: li a2, 128
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
index 8fad3db55f9bcd..8a199770c163dc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
@@ -384,6 +384,7 @@ declare <256 x i8> @llvm.vp.ssub.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vssub_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssub_vi_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: addi a3, a1, -128
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
index ca35aa6c4a94c1..37c5e397569277 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
@@ -379,6 +379,7 @@ declare <256 x i8> @llvm.vp.usub.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vssubu_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssubu_vi_v258i8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: addi a3, a1, -128
diff --git a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
index e6dfe5e78cdb4b..ff12aaac3a983d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.floor.nxv4bf16(<vscale x 4 x bfloat>, <vs
define <vscale x 4 x bfloat> @vp_floor_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.floor.nxv8bf16(<vscale x 8 x bfloat>, <vs
define <vscale x 8 x bfloat> @vp_floor_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.floor.nxv16bf16(<vscale x 16 x bfloat>,
define <vscale x 16 x bfloat> @vp_floor_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -279,9 +279,9 @@ define <vscale x 32 x bfloat> @vp_floor_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -582,8 +582,8 @@ define <vscale x 4 x half> @vp_floor_nxv4f16(<vscale x 4 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vp_floor_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -649,6 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.floor.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_floor_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -668,8 +669,8 @@ define <vscale x 8 x half> @vp_floor_nxv8f16(<vscale x 8 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vp_floor_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -735,6 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.floor.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_floor_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,8 +756,8 @@ define <vscale x 16 x half> @vp_floor_nxv16f16(<vscale x 16 x half> %va, <vscale
;
; ZVFHMIN-LABEL: vp_floor_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -821,6 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.floor.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -846,9 +849,9 @@ define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1068,6 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.floor.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_floor_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1112,6 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.floor.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_floor_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1156,6 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.floor.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_floor_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1242,6 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.floor.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_floor_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1286,6 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.floor.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_floor_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1330,6 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.floor.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_floor_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1374,6 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.floor.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_floor_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1425,13 +1435,13 @@ define <vscale x 16 x double> @vp_floor_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
index a1cdbd4be25794..8b527fb152d681 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
@@ -157,6 +157,7 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFH-NEXT: slli a0, a0, 1
; ZVFH-NEXT: add a0, a0, a1
; ZVFH-NEXT: sub sp, sp, a0
+; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vmv8r.v v24, v16
; ZVFH-NEXT: csrr a0, vlenb
; ZVFH-NEXT: slli a0, a0, 3
@@ -164,7 +165,6 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFH-NEXT: addi a0, a0, 16
; ZVFH-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFH-NEXT: vmv8r.v v0, v8
-; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v24
; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v0
; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -232,6 +232,7 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 3
@@ -239,7 +240,6 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v0, v8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v24
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v0
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -501,6 +501,7 @@ define <vscale x 32 x half> @vfmax_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 3
@@ -508,7 +509,6 @@ define <vscale x 32 x half> @vfmax_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v0, v8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v24
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v0
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
index 33e793a691b81d..025925cbce4b32 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
@@ -19,8 +19,8 @@ declare <vscale x 1 x bfloat> @llvm.vp.maximum.nxv1bf16(<vscale x 1 x bfloat>, <
define <vscale x 1 x bfloat> @vfmax_vv_nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, mf4, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -66,8 +66,8 @@ declare <vscale x 2 x bfloat> @llvm.vp.maximum.nxv2bf16(<vscale x 2 x bfloat>, <
define <vscale x 2 x bfloat> @vfmax_vv_nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -113,8 +113,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.maximum.nxv4bf16(<vscale x 4 x bfloat>, <
define <vscale x 4 x bfloat> @vfmax_vv_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -162,8 +162,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.maximum.nxv8bf16(<vscale x 8 x bfloat>, <
define <vscale x 8 x bfloat> @vfmax_vv_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -217,8 +217,8 @@ define <vscale x 16 x bfloat> @vfmax_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v24, v24, v0.t
@@ -569,6 +569,7 @@ declare <vscale x 1 x half> @llvm.vp.maximum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmax_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv1f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -582,8 +583,8 @@ define <vscale x 1 x half> @vfmax_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmax_vv_nxv1f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -639,6 +640,7 @@ declare <vscale x 2 x half> @llvm.vp.maximum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmax_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv2f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -652,8 +654,8 @@ define <vscale x 2 x half> @vfmax_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmax_vv_nxv2f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -709,6 +711,7 @@ declare <vscale x 4 x half> @llvm.vp.maximum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmax_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv4f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -722,8 +725,8 @@ define <vscale x 4 x half> @vfmax_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmax_vv_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -781,6 +784,7 @@ declare <vscale x 8 x half> @llvm.vp.maximum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmax_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -796,8 +800,8 @@ define <vscale x 8 x half> @vfmax_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmax_vv_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -855,6 +859,7 @@ declare <vscale x 16 x half> @llvm.vp.maximum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -876,8 +881,8 @@ define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v24, v24, v0.t
@@ -964,6 +969,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1280,6 +1286,7 @@ declare <vscale x 1 x float> @llvm.vp.maximum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmax_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1313,6 +1320,7 @@ declare <vscale x 2 x float> @llvm.vp.maximum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmax_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1346,6 +1354,7 @@ declare <vscale x 4 x float> @llvm.vp.maximum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmax_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1381,6 +1390,7 @@ declare <vscale x 8 x float> @llvm.vp.maximum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmax_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1416,6 +1426,7 @@ declare <vscale x 1 x double> @llvm.vp.maximum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmax_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1449,6 +1460,7 @@ declare <vscale x 2 x double> @llvm.vp.maximum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmax_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1484,6 +1496,7 @@ declare <vscale x 4 x double> @llvm.vp.maximum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmax_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1525,6 +1538,7 @@ define <vscale x 8 x double> @vfmax_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1577,6 +1591,7 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 24 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -1588,7 +1603,6 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a3, a1, 3
; CHECK-NEXT: srli a4, a1, 3
-; CHECK-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a4
; CHECK-NEXT: sub a4, a2, a1
; CHECK-NEXT: add a3, a0, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
index d41da7b6a2af96..16c119dd87fe01 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
@@ -157,6 +157,7 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFH-NEXT: slli a0, a0, 1
; ZVFH-NEXT: add a0, a0, a1
; ZVFH-NEXT: sub sp, sp, a0
+; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vmv8r.v v24, v16
; ZVFH-NEXT: csrr a0, vlenb
; ZVFH-NEXT: slli a0, a0, 3
@@ -164,7 +165,6 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFH-NEXT: addi a0, a0, 16
; ZVFH-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFH-NEXT: vmv8r.v v0, v8
-; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v24
; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v0
; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -232,6 +232,7 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 3
@@ -239,7 +240,6 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v0, v8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v24
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v0
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -501,6 +501,7 @@ define <vscale x 32 x half> @vfmin_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 3
@@ -508,7 +509,6 @@ define <vscale x 32 x half> @vfmin_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v0, v8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v24
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v0
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
index ef6f33de4ce636..7bc8c3d2fdc57c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
@@ -19,8 +19,8 @@ declare <vscale x 1 x bfloat> @llvm.vp.minimum.nxv1bf16(<vscale x 1 x bfloat>, <
define <vscale x 1 x bfloat> @vfmin_vv_nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, mf4, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -66,8 +66,8 @@ declare <vscale x 2 x bfloat> @llvm.vp.minimum.nxv2bf16(<vscale x 2 x bfloat>, <
define <vscale x 2 x bfloat> @vfmin_vv_nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -113,8 +113,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.minimum.nxv4bf16(<vscale x 4 x bfloat>, <
define <vscale x 4 x bfloat> @vfmin_vv_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -162,8 +162,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.minimum.nxv8bf16(<vscale x 8 x bfloat>, <
define <vscale x 8 x bfloat> @vfmin_vv_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -217,8 +217,8 @@ define <vscale x 16 x bfloat> @vfmin_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v8, v24, v24, v0.t
@@ -569,6 +569,7 @@ declare <vscale x 1 x half> @llvm.vp.minimum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmin_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv1f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -582,8 +583,8 @@ define <vscale x 1 x half> @vfmin_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmin_vv_nxv1f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -639,6 +640,7 @@ declare <vscale x 2 x half> @llvm.vp.minimum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmin_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv2f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -652,8 +654,8 @@ define <vscale x 2 x half> @vfmin_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmin_vv_nxv2f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v0, v11, v11, v0.t
@@ -709,6 +711,7 @@ declare <vscale x 4 x half> @llvm.vp.minimum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmin_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv4f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -722,8 +725,8 @@ define <vscale x 4 x half> @vfmin_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmin_vv_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v12, v12, v0.t
@@ -781,6 +784,7 @@ declare <vscale x 8 x half> @llvm.vp.minimum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmin_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -796,8 +800,8 @@ define <vscale x 8 x half> @vfmin_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vfmin_vv_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v16, v16, v0.t
@@ -855,6 +859,7 @@ declare <vscale x 16 x half> @llvm.vp.minimum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -876,8 +881,8 @@ define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; ZVFHMIN-NEXT: vmfeq.vv v8, v24, v24, v0.t
@@ -964,6 +969,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1280,6 +1286,7 @@ declare <vscale x 1 x float> @llvm.vp.minimum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmin_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1313,6 +1320,7 @@ declare <vscale x 2 x float> @llvm.vp.minimum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmin_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1346,6 +1354,7 @@ declare <vscale x 4 x float> @llvm.vp.minimum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmin_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1381,6 +1390,7 @@ declare <vscale x 8 x float> @llvm.vp.minimum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmin_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1416,6 +1426,7 @@ declare <vscale x 1 x double> @llvm.vp.minimum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmin_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1449,6 +1460,7 @@ declare <vscale x 2 x double> @llvm.vp.minimum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmin_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1484,6 +1496,7 @@ declare <vscale x 4 x double> @llvm.vp.minimum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmin_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1525,6 +1538,7 @@ define <vscale x 8 x double> @vfmin_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1577,6 +1591,7 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 24 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -1588,7 +1603,6 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a3, a1, 3
; CHECK-NEXT: srli a4, a1, 3
-; CHECK-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a4
; CHECK-NEXT: sub a4, a2, a1
; CHECK-NEXT: add a3, a0, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll b/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
index 025874a1a74e2e..0514e0e6949159 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
@@ -18,8 +18,8 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV32-NEXT: .LBB0_1: # %for.body
; RV32-NEXT: # =>This Inner Loop Header: Depth=1
; RV32-NEXT: th.lrb a0, a1, a0, 0
-; RV32-NEXT: vmv1r.v v9, v8
; RV32-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; RV32-NEXT: vmv1r.v v9, v8
; RV32-NEXT: vmv.s.x v9, a0
; RV32-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; RV32-NEXT: vmseq.vi v9, v9, 0
@@ -45,8 +45,8 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV64-NEXT: # =>This Inner Loop Header: Depth=1
; RV64-NEXT: sext.w a0, a0
; RV64-NEXT: th.lrb a0, a1, a0, 0
-; RV64-NEXT: vmv1r.v v9, v8
; RV64-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; RV64-NEXT: vmv1r.v v9, v8
; RV64-NEXT: vmv.s.x v9, a0
; RV64-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; RV64-NEXT: vmseq.vi v9, v9, 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
index c7e3c8cb519829..2078670f330e28 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
@@ -703,6 +703,7 @@ define <vscale x 16 x i32> @fshl_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re32.v v24, (a0)
; CHECK-NEXT: li a0, 31
@@ -882,6 +883,7 @@ define <vscale x 7 x i64> @fshl_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
@@ -953,6 +955,7 @@ define <vscale x 8 x i64> @fshl_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
@@ -988,6 +991,7 @@ define <vscale x 16 x i64> @fshr_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x30, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 48 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: li a3, 24
@@ -1022,7 +1026,6 @@ define <vscale x 16 x i64> @fshr_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: add a3, sp, a3
; CHECK-NEXT: addi a3, a3, 16
; CHECK-NEXT: vs8r.v v16, (a3) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a3, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a6
; CHECK-NEXT: li a3, 63
; CHECK-NEXT: csrr a6, vlenb
@@ -1173,6 +1176,7 @@ define <vscale x 16 x i64> @fshl_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x28, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 40 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 5
@@ -1189,7 +1193,6 @@ define <vscale x 16 x i64> @fshl_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: slli a5, a3, 3
; CHECK-NEXT: srli a1, a3, 3
; CHECK-NEXT: sub a6, a4, a3
-; CHECK-NEXT: vsetvli a7, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a1
; CHECK-NEXT: add a1, a2, a5
; CHECK-NEXT: vl8re64.v v8, (a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll b/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
index 967a58b45a599b..fa14a1f1d186b0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
@@ -365,11 +365,13 @@ entry:
define <vscale x 4 x i8> @test_specify_reg_mf2(<vscale x 4 x i8> %in, <vscale x 4 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_mf2:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v9
; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v1, v2
; CHECK-NEXT: #NO_APP
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -380,11 +382,13 @@ entry:
define <vscale x 8 x i8> @test_specify_reg_m1(<vscale x 8 x i8> %in, <vscale x 8 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_m1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v9
; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v1, v2
; CHECK-NEXT: #NO_APP
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -395,11 +399,13 @@ entry:
define <vscale x 16 x i8> @test_specify_reg_m2(<vscale x 16 x i8> %in, <vscale x 16 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_m2:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v4, v10
; CHECK-NEXT: vmv2r.v v2, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v2, v4
; CHECK-NEXT: #NO_APP
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -410,6 +416,7 @@ entry:
define <vscale x 1 x i1> @test_specify_reg_mask(<vscale x 1 x i1> %in, <vscale x 1 x i1> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_mask:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v8
; CHECK-NEXT: vmv1r.v v1, v0
; CHECK-NEXT: #APP
diff --git a/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
index 8925a9e0cee321..86a5a4878c6569 100644
--- a/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
@@ -5,6 +5,7 @@
define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv4i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec, i64 0)
@@ -14,6 +15,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv4i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec, i64 4)
@@ -23,6 +25,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 0)
@@ -32,6 +35,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 2)
@@ -41,6 +45,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 4)
@@ -50,6 +55,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_6(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_6:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 6)
@@ -86,6 +92,7 @@ define <vscale x 4 x i8> @insert_nxv1i8_nxv4i8_3(<vscale x 4 x i8> %vec, <vscale
define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv8i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec, i64 0)
@@ -95,6 +102,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv8i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec, i64 8)
@@ -104,6 +112,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 0)
@@ -113,6 +122,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v10, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 4)
@@ -122,6 +132,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 8)
@@ -131,6 +142,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_12:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 12)
@@ -140,6 +152,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 0)
@@ -149,6 +162,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 2)
@@ -158,6 +172,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_4:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 4)
@@ -167,6 +182,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_6:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 6)
@@ -176,6 +192,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 8)
@@ -185,6 +202,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_10:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 10)
@@ -194,6 +212,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_12:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 12)
@@ -203,6 +222,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_14(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_14:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v15, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 14)
@@ -512,6 +532,7 @@ define <vscale x 2 x i64> @insert_nxv2i64_nxv3i64(<3 x i64> %sv) #0 {
define <vscale x 8 x i32> @insert_insert_combine(<2 x i32> %subvec) {
; CHECK-LABEL: insert_insert_combine:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: ret
%inner = call <vscale x 4 x i32> @llvm.vector.insert.nxv4i32.v2i32(<vscale x 4 x i32> undef, <2 x i32> %subvec, i64 0)
@@ -524,6 +545,7 @@ define <vscale x 8 x i32> @insert_insert_combine(<2 x i32> %subvec) {
define <vscale x 8 x i32> @insert_insert_combine2(<vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_insert_combine2:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: ret
%inner = call <vscale x 4 x i32> @llvm.vector.insert.nxv2i32.nxv4i32(<vscale x 4 x i32> undef, <vscale x 2 x i32> %subvec, i64 0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/llrint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/llrint-vp.ll
index ffb9bf76fb4fab..166dba6a565243 100644
--- a/llvm/test/CodeGen/RISCV/rvv/llrint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/llrint-vp.ll
@@ -55,11 +55,11 @@ declare <vscale x 8 x i64> @llvm.vp.llrint.nxv8i64.nxv8f32(<vscale x 8 x float>,
define <vscale x 16 x i64> @llrint_nxv16i64_nxv16f32(<vscale x 16 x float> %x, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: llrint_nxv16i64_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/lrint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/lrint-vp.ll
index 9991bbc9725ba3..21045b69a8b5dc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/lrint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/lrint-vp.ll
@@ -117,11 +117,11 @@ define <vscale x 16 x iXLen> @lrint_nxv16f32(<vscale x 16 x float> %x, <vscale x
;
; RV64-i64-LABEL: lrint_nxv16f32:
; RV64-i64: # %bb.0:
+; RV64-i64-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV64-i64-NEXT: vmv1r.v v24, v0
; RV64-i64-NEXT: csrr a1, vlenb
; RV64-i64-NEXT: srli a2, a1, 3
; RV64-i64-NEXT: sub a3, a0, a1
-; RV64-i64-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; RV64-i64-NEXT: vslidedown.vx v0, v0, a2
; RV64-i64-NEXT: sltu a2, a0, a3
; RV64-i64-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
index 420597b009f33f..12bddded9a4075 100644
--- a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
@@ -1288,6 +1288,7 @@ declare <vscale x 1 x i8> @llvm.riscv.viota.mask.nxv1i8(
define <vscale x 1 x i8> @intrinsic_viota_mask_m_nxv1i8_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_viota_mask_m_nxv1i8_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -1312,6 +1313,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -1443,6 +1445,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
index a3eddbcc2baed4..727908b67c6dd9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
@@ -221,11 +221,13 @@ define <vscale x 4 x i8> @mgather_truemask_nxv4i8(<vscale x 4 x ptr> %ptrs, <vsc
define <vscale x 4 x i8> @mgather_falsemask_nxv4i8(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i8> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i8:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i8:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i8> @llvm.masked.gather.nxv4i8.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 1, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i8> %passthru)
@@ -442,11 +444,13 @@ define <vscale x 4 x i16> @mgather_truemask_nxv4i16(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i16> @mgather_falsemask_nxv4i16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i16> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i16:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i16> @llvm.masked.gather.nxv4i16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i16> %passthru)
@@ -686,11 +690,13 @@ define <vscale x 4 x i32> @mgather_truemask_nxv4i32(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i32> @mgather_falsemask_nxv4i32(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i32> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i32:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i32:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i32> @llvm.masked.gather.nxv4i32.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 4, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i32> %passthru)
@@ -949,6 +955,7 @@ define <vscale x 4 x i64> @mgather_truemask_nxv4i64(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i64> @mgather_falsemask_nxv4i64(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i64> %passthru) {
; CHECK-LABEL: mgather_falsemask_nxv4i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 4 x i64> @llvm.masked.gather.nxv4i64.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 8, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i64> %passthru)
@@ -1232,12 +1239,12 @@ define void @mgather_nxv16i64(<vscale x 8 x ptr> %ptrs0, <vscale x 8 x ptr> %ptr
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: addi a3, sp, 16
; RV64-NEXT: vs8r.v v16, (a3) # Unknown-size Folded Spill
+; RV64-NEXT: vsetvli a3, zero, e8, mf4, ta, ma
; RV64-NEXT: vmv8r.v v16, v8
; RV64-NEXT: vl8re64.v v24, (a0)
; RV64-NEXT: csrr a0, vlenb
; RV64-NEXT: vl8re64.v v8, (a1)
; RV64-NEXT: srli a1, a0, 3
-; RV64-NEXT: vsetvli a3, zero, e8, mf4, ta, ma
; RV64-NEXT: vslidedown.vx v7, v0, a1
; RV64-NEXT: vsetvli a1, zero, e64, m8, ta, mu
; RV64-NEXT: vluxei64.v v24, (zero), v16, v0.t
@@ -1348,11 +1355,13 @@ define <vscale x 4 x bfloat> @mgather_truemask_nxv4bf16(<vscale x 4 x ptr> %ptrs
define <vscale x 4 x bfloat> @mgather_falsemask_nxv4bf16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x bfloat> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4bf16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4bf16:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x bfloat> @llvm.masked.gather.nxv4bf16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x bfloat> %passthru)
@@ -1549,11 +1558,13 @@ define <vscale x 4 x half> @mgather_truemask_nxv4f16(<vscale x 4 x ptr> %ptrs, <
define <vscale x 4 x half> @mgather_falsemask_nxv4f16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x half> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4f16:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4f16:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x half> @llvm.masked.gather.nxv4f16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x half> %passthru)
@@ -1749,11 +1760,13 @@ define <vscale x 4 x float> @mgather_truemask_nxv4f32(<vscale x 4 x ptr> %ptrs,
define <vscale x 4 x float> @mgather_falsemask_nxv4f32(<vscale x 4 x ptr> %ptrs, <vscale x 4 x float> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4f32:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4f32:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x float> @llvm.masked.gather.nxv4f32.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 4, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x float> %passthru)
@@ -2012,6 +2025,7 @@ define <vscale x 4 x double> @mgather_truemask_nxv4f64(<vscale x 4 x ptr> %ptrs,
define <vscale x 4 x double> @mgather_falsemask_nxv4f64(<vscale x 4 x ptr> %ptrs, <vscale x 4 x double> %passthru) {
; CHECK-LABEL: mgather_falsemask_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 4 x double> @llvm.masked.gather.nxv4f64.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 8, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x double> %passthru)
@@ -2317,8 +2331,8 @@ define <vscale x 32 x i8> @mgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8>
;
; RV64-LABEL: mgather_baseidx_nxv32i8:
; RV64: # %bb.0:
-; RV64-NEXT: vmv1r.v v16, v0
; RV64-NEXT: vsetvli a1, zero, e64, m8, ta, ma
+; RV64-NEXT: vmv1r.v v16, v0
; RV64-NEXT: vsext.vf8 v24, v8
; RV64-NEXT: csrr a1, vlenb
; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
index 72c251ce985cbf..77a1f508d22184 100644
--- a/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
@@ -2009,11 +2009,11 @@ define void @mscatter_baseidx_nxv16i16_nxv16f64(<vscale x 8 x double> %val0, <vs
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: addi a2, sp, 16
; RV32-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; RV32-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
; RV32-NEXT: vmv8r.v v16, v8
; RV32-NEXT: vl4re16.v v8, (a1)
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: srli a1, a1, 3
-; RV32-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
; RV32-NEXT: vslidedown.vx v7, v0, a1
; RV32-NEXT: vsetvli a1, zero, e32, m8, ta, ma
; RV32-NEXT: vsext.vf2 v24, v8
diff --git a/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll b/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
index c49a55319a3c41..9298e8b520bd9c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
@@ -1169,10 +1169,10 @@ define <vscale x 32 x i8> @reverse_nxv32i8(<vscale x 32 x i8> %a) {
define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
; RV32-BITS-UNKNOWN-LABEL: reverse_nxv64i8:
; RV32-BITS-UNKNOWN: # %bb.0:
+; RV32-BITS-UNKNOWN-NEXT: vsetvli a0, zero, e16, m2, ta, ma
; RV32-BITS-UNKNOWN-NEXT: vmv8r.v v16, v8
; RV32-BITS-UNKNOWN-NEXT: csrr a0, vlenb
; RV32-BITS-UNKNOWN-NEXT: addi a0, a0, -1
-; RV32-BITS-UNKNOWN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
; RV32-BITS-UNKNOWN-NEXT: vid.v v8
; RV32-BITS-UNKNOWN-NEXT: vrsub.vx v24, v8, a0
; RV32-BITS-UNKNOWN-NEXT: vsetvli zero, zero, e8, m1, ta, ma
@@ -1188,10 +1188,10 @@ define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
;
; RV32-BITS-256-LABEL: reverse_nxv64i8:
; RV32-BITS-256: # %bb.0:
+; RV32-BITS-256-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-BITS-256-NEXT: vmv8r.v v16, v8
; RV32-BITS-256-NEXT: csrr a0, vlenb
; RV32-BITS-256-NEXT: addi a0, a0, -1
-; RV32-BITS-256-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-BITS-256-NEXT: vid.v v8
; RV32-BITS-256-NEXT: vrsub.vx v24, v8, a0
; RV32-BITS-256-NEXT: vrgather.vv v15, v16, v24
@@ -1206,10 +1206,10 @@ define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
;
; RV32-BITS-512-LABEL: reverse_nxv64i8:
; RV32-BITS-512: # %bb.0:
+; RV32-BITS-512-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-BITS-512-NEXT: vmv8r.v v16, v8
; RV32-BITS-512-NEXT: csrr a0, vlenb
; RV32-BITS-512-NEXT: addi a0, a0, -1
-; RV32-BITS-512-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-BITS-512-NEXT: vid.v v8
; RV32-BITS-512-NEXT: vrsub.vx v24, v8, a0
; RV32-BITS-512-NEXT: vrgather.vv v15, v16, v24
@@ -1224,10 +1224,10 @@ define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
;
; RV64-BITS-UNKNOWN-LABEL: reverse_nxv64i8:
; RV64-BITS-UNKNOWN: # %bb.0:
+; RV64-BITS-UNKNOWN-NEXT: vsetvli a0, zero, e16, m2, ta, ma
; RV64-BITS-UNKNOWN-NEXT: vmv8r.v v16, v8
; RV64-BITS-UNKNOWN-NEXT: csrr a0, vlenb
; RV64-BITS-UNKNOWN-NEXT: addi a0, a0, -1
-; RV64-BITS-UNKNOWN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
; RV64-BITS-UNKNOWN-NEXT: vid.v v8
; RV64-BITS-UNKNOWN-NEXT: vrsub.vx v24, v8, a0
; RV64-BITS-UNKNOWN-NEXT: vsetvli zero, zero, e8, m1, ta, ma
@@ -1243,10 +1243,10 @@ define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
;
; RV64-BITS-256-LABEL: reverse_nxv64i8:
; RV64-BITS-256: # %bb.0:
+; RV64-BITS-256-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-BITS-256-NEXT: vmv8r.v v16, v8
; RV64-BITS-256-NEXT: csrr a0, vlenb
; RV64-BITS-256-NEXT: addi a0, a0, -1
-; RV64-BITS-256-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-BITS-256-NEXT: vid.v v8
; RV64-BITS-256-NEXT: vrsub.vx v24, v8, a0
; RV64-BITS-256-NEXT: vrgather.vv v15, v16, v24
@@ -1261,10 +1261,10 @@ define <vscale x 64 x i8> @reverse_nxv64i8(<vscale x 64 x i8> %a) {
;
; RV64-BITS-512-LABEL: reverse_nxv64i8:
; RV64-BITS-512: # %bb.0:
+; RV64-BITS-512-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-BITS-512-NEXT: vmv8r.v v16, v8
; RV64-BITS-512-NEXT: csrr a0, vlenb
; RV64-BITS-512-NEXT: addi a0, a0, -1
-; RV64-BITS-512-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-BITS-512-NEXT: vid.v v8
; RV64-BITS-512-NEXT: vrsub.vx v24, v8, a0
; RV64-BITS-512-NEXT: vrgather.vv v15, v16, v24
@@ -1367,11 +1367,11 @@ define <vscale x 16 x i16> @reverse_nxv16i16(<vscale x 16 x i16> %a) {
define <vscale x 32 x i16> @reverse_nxv32i16(<vscale x 32 x i16> %a) {
; CHECK-LABEL: reverse_nxv32i16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 1
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1458,11 +1458,11 @@ define <vscale x 8 x i32> @reverse_nxv8i32(<vscale x 8 x i32> %a) {
define <vscale x 16 x i32> @reverse_nxv16i32(<vscale x 16 x i32> %a) {
; CHECK-LABEL: reverse_nxv16i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 2
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1533,11 +1533,11 @@ define <vscale x 4 x i64> @reverse_nxv4i64(<vscale x 4 x i64> %a) {
define <vscale x 8 x i64> @reverse_nxv8i64(<vscale x 8 x i64> %a) {
; CHECK-LABEL: reverse_nxv8i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 3
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e64, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1644,11 +1644,11 @@ define <vscale x 16 x bfloat> @reverse_nxv16bf16(<vscale x 16 x bfloat> %a) {
define <vscale x 32 x bfloat> @reverse_nxv32bf16(<vscale x 32 x bfloat> %a) {
; CHECK-LABEL: reverse_nxv32bf16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 1
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1751,11 +1751,11 @@ define <vscale x 16 x half> @reverse_nxv16f16(<vscale x 16 x half> %a) {
define <vscale x 32 x half> @reverse_nxv32f16(<vscale x 32 x half> %a) {
; CHECK-LABEL: reverse_nxv32f16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 1
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1842,11 +1842,11 @@ define <vscale x 8 x float> @reverse_nxv8f32(<vscale x 8 x float> %a) {
define <vscale x 16 x float> @reverse_nxv16f32(<vscale x 16 x float> %a) {
; CHECK-LABEL: reverse_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 2
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e32, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
@@ -1917,11 +1917,11 @@ define <vscale x 4 x double> @reverse_nxv4f64(<vscale x 4 x double> %a) {
define <vscale x 8 x double> @reverse_nxv8f64(<vscale x 8 x double> %a) {
; CHECK-LABEL: reverse_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a0, a0, 3
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: vsetvli a1, zero, e64, m1, ta, ma
; CHECK-NEXT: vid.v v8
; CHECK-NEXT: vrsub.vx v24, v8, a0
; CHECK-NEXT: vrgather.vv v15, v16, v24
diff --git a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
index 94fce80ad3b8e4..096d9864f940a8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.nearbyint.nxv4bf16(<vscale x 4 x bfloat>,
define <vscale x 4 x bfloat> @vp_nearbyint_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.nearbyint.nxv8bf16(<vscale x 8 x bfloat>,
define <vscale x 8 x bfloat> @vp_nearbyint_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.nearbyint.nxv16bf16(<vscale x 16 x bfloa
define <vscale x 16 x bfloat> @vp_nearbyint_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -273,9 +273,9 @@ declare <vscale x 32 x bfloat> @llvm.vp.nearbyint.nxv32bf16(<vscale x 32 x bfloa
define <vscale x 32 x bfloat> @vp_nearbyint_nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv32bf16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -566,8 +566,8 @@ define <vscale x 4 x half> @vp_nearbyint_nxv4f16(<vscale x 4 x half> %va, <vscal
;
; ZVFHMIN-LABEL: vp_nearbyint_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -633,6 +633,7 @@ declare <vscale x 8 x half> @llvm.vp.nearbyint.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_nearbyint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -652,8 +653,8 @@ define <vscale x 8 x half> @vp_nearbyint_nxv8f16(<vscale x 8 x half> %va, <vscal
;
; ZVFHMIN-LABEL: vp_nearbyint_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -719,6 +720,7 @@ declare <vscale x 16 x half> @llvm.vp.nearbyint.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_nearbyint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -738,8 +740,8 @@ define <vscale x 16 x half> @vp_nearbyint_nxv16f16(<vscale x 16 x half> %va, <vs
;
; ZVFHMIN-LABEL: vp_nearbyint_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -805,6 +807,7 @@ declare <vscale x 32 x half> @llvm.vp.nearbyint.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -824,9 +827,9 @@ define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vs
;
; ZVFHMIN-LABEL: vp_nearbyint_nxv32f16:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1036,6 +1039,7 @@ declare <vscale x 4 x float> @llvm.vp.nearbyint.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_nearbyint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1080,6 +1084,7 @@ declare <vscale x 8 x float> @llvm.vp.nearbyint.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_nearbyint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1124,6 +1129,7 @@ declare <vscale x 16 x float> @llvm.vp.nearbyint.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_nearbyint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1210,6 +1216,7 @@ declare <vscale x 2 x double> @llvm.vp.nearbyint.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_nearbyint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1254,6 +1261,7 @@ declare <vscale x 4 x double> @llvm.vp.nearbyint.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_nearbyint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1298,6 +1306,7 @@ declare <vscale x 7 x double> @llvm.vp.nearbyint.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_nearbyint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1342,6 +1351,7 @@ declare <vscale x 8 x double> @llvm.vp.nearbyint.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_nearbyint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1387,13 +1397,13 @@ declare <vscale x 16 x double> @llvm.vp.nearbyint.nxv16f64(<vscale x 16 x double
define <vscale x 16 x double> @vp_nearbyint_nxv16f64(<vscale x 16 x double> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr88576.ll b/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
index 37c67b9ff2f6af..dd7debd3ab0466 100644
--- a/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
@@ -23,10 +23,10 @@ define i1 @foo(<vscale x 16 x i8> %x, i64 %y) {
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: andi sp, sp, -64
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: addi a2, sp, 64
; CHECK-NEXT: slli a1, a1, 3
-; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v16, 0
; CHECK-NEXT: add a0, a2, a0
; CHECK-NEXT: add a1, a2, a1
@@ -53,8 +53,8 @@ define i1 @foo(<vscale x 16 x i8> %x, i64 %y) {
define i8 @bar(<vscale x 128 x i1> %x, i64 %y) {
; CHECK-LABEL: bar:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: vsetivli zero, 1, e8, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: vslidedown.vx v8, v0, a0
; CHECK-NEXT: vmv.x.s a0, v8
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
index 2a69dd31118bd8..f40f828e834bec 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
@@ -109,8 +109,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.rint.nxv4bf16(<vscale x 4 x bfloat>, <vsc
define <vscale x 4 x bfloat> @vp_rint_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -157,8 +157,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.rint.nxv8bf16(<vscale x 8 x bfloat>, <vsc
define <vscale x 8 x bfloat> @vp_rint_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -205,8 +205,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.rint.nxv16bf16(<vscale x 16 x bfloat>, <
define <vscale x 16 x bfloat> @vp_rint_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -259,9 +259,9 @@ define <vscale x 32 x bfloat> @vp_rint_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -535,8 +535,8 @@ define <vscale x 4 x half> @vp_rint_nxv4f16(<vscale x 4 x half> %va, <vscale x 4
;
; ZVFHMIN-LABEL: vp_rint_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -596,6 +596,7 @@ declare <vscale x 8 x half> @llvm.vp.rint.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_rint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -613,8 +614,8 @@ define <vscale x 8 x half> @vp_rint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8
;
; ZVFHMIN-LABEL: vp_rint_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -674,6 +675,7 @@ declare <vscale x 16 x half> @llvm.vp.rint.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_rint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -691,8 +693,8 @@ define <vscale x 16 x half> @vp_rint_nxv16f16(<vscale x 16 x half> %va, <vscale
;
; ZVFHMIN-LABEL: vp_rint_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -752,6 +754,7 @@ declare <vscale x 32 x half> @llvm.vp.rint.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -775,9 +778,9 @@ define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -978,6 +981,7 @@ declare <vscale x 4 x float> @llvm.vp.rint.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_rint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1018,6 +1022,7 @@ declare <vscale x 8 x float> @llvm.vp.rint.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_rint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1058,6 +1063,7 @@ declare <vscale x 16 x float> @llvm.vp.rint.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_rint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1136,6 +1142,7 @@ declare <vscale x 2 x double> @llvm.vp.rint.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_rint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1176,6 +1183,7 @@ declare <vscale x 4 x double> @llvm.vp.rint.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_rint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1216,6 +1224,7 @@ declare <vscale x 7 x double> @llvm.vp.rint.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_rint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1256,6 +1265,7 @@ declare <vscale x 8 x double> @llvm.vp.rint.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_rint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1303,13 +1313,13 @@ define <vscale x 16 x double> @vp_rint_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
index 8a10e75333ad0a..b0bec88bd57d66 100644
--- a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.round.nxv4bf16(<vscale x 4 x bfloat>, <vs
define <vscale x 4 x bfloat> @vp_round_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.round.nxv8bf16(<vscale x 8 x bfloat>, <vs
define <vscale x 8 x bfloat> @vp_round_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.round.nxv16bf16(<vscale x 16 x bfloat>,
define <vscale x 16 x bfloat> @vp_round_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -279,9 +279,9 @@ define <vscale x 32 x bfloat> @vp_round_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -582,8 +582,8 @@ define <vscale x 4 x half> @vp_round_nxv4f16(<vscale x 4 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vp_round_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -649,6 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.round.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_round_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -668,8 +669,8 @@ define <vscale x 8 x half> @vp_round_nxv8f16(<vscale x 8 x half> %va, <vscale x
;
; ZVFHMIN-LABEL: vp_round_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -735,6 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.round.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_round_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,8 +756,8 @@ define <vscale x 16 x half> @vp_round_nxv16f16(<vscale x 16 x half> %va, <vscale
;
; ZVFHMIN-LABEL: vp_round_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -821,6 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.round.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -846,9 +849,9 @@ define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1068,6 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.round.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_round_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1112,6 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.round.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_round_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1156,6 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.round.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_round_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1242,6 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.round.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_round_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1286,6 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.round.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_round_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1330,6 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.round.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_round_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1374,6 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.round.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_round_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1425,13 +1435,13 @@ define <vscale x 16 x double> @vp_round_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
index 4cd909e4b0a637..a5d346cfd81a7f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.roundeven.nxv4bf16(<vscale x 4 x bfloat>,
define <vscale x 4 x bfloat> @vp_roundeven_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.roundeven.nxv8bf16(<vscale x 8 x bfloat>,
define <vscale x 8 x bfloat> @vp_roundeven_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.roundeven.nxv16bf16(<vscale x 16 x bfloa
define <vscale x 16 x bfloat> @vp_roundeven_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -279,9 +279,9 @@ define <vscale x 32 x bfloat> @vp_roundeven_nxv32bf16(<vscale x 32 x bfloat> %va
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -582,8 +582,8 @@ define <vscale x 4 x half> @vp_roundeven_nxv4f16(<vscale x 4 x half> %va, <vscal
;
; ZVFHMIN-LABEL: vp_roundeven_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -649,6 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.roundeven.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_roundeven_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -668,8 +669,8 @@ define <vscale x 8 x half> @vp_roundeven_nxv8f16(<vscale x 8 x half> %va, <vscal
;
; ZVFHMIN-LABEL: vp_roundeven_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -735,6 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.roundeven.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_roundeven_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,8 +756,8 @@ define <vscale x 16 x half> @vp_roundeven_nxv16f16(<vscale x 16 x half> %va, <vs
;
; ZVFHMIN-LABEL: vp_roundeven_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -821,6 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.roundeven.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -846,9 +849,9 @@ define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vs
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1068,6 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.roundeven.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_roundeven_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1112,6 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.roundeven.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_roundeven_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1156,6 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.roundeven.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_roundeven_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1242,6 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.roundeven.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_roundeven_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1286,6 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.roundeven.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_roundeven_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1330,6 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.roundeven.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_roundeven_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1374,6 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.roundeven.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_roundeven_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1425,13 +1435,13 @@ define <vscale x 16 x double> @vp_roundeven_nxv16f64(<vscale x 16 x double> %va,
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
index 96c821a76ae84e..13fdc11145001e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
@@ -117,8 +117,8 @@ declare <vscale x 4 x bfloat> @llvm.vp.roundtozero.nxv4bf16(<vscale x 4 x bfloat
define <vscale x 4 x bfloat> @vp_roundtozero_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -169,8 +169,8 @@ declare <vscale x 8 x bfloat> @llvm.vp.roundtozero.nxv8bf16(<vscale x 8 x bfloat
define <vscale x 8 x bfloat> @vp_roundtozero_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -221,8 +221,8 @@ declare <vscale x 16 x bfloat> @llvm.vp.roundtozero.nxv16bf16(<vscale x 16 x bfl
define <vscale x 16 x bfloat> @vp_roundtozero_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv16bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: lui a1, 307200
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -279,9 +279,9 @@ define <vscale x 32 x bfloat> @vp_roundtozero_nxv32bf16(<vscale x 32 x bfloat> %
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12
; CHECK-NEXT: lui a3, 307200
; CHECK-NEXT: slli a1, a2, 1
@@ -582,8 +582,8 @@ define <vscale x 4 x half> @vp_roundtozero_nxv4f16(<vscale x 4 x half> %va, <vsc
;
; ZVFHMIN-LABEL: vp_roundtozero_nxv4f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v9, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m2, ta, ma
@@ -649,6 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.roundtozero.nxv8f16(<vscale x 8 x half>, <v
define <vscale x 8 x half> @vp_roundtozero_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv8f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -668,8 +669,8 @@ define <vscale x 8 x half> @vp_roundtozero_nxv8f16(<vscale x 8 x half> %va, <vsc
;
; ZVFHMIN-LABEL: vp_roundtozero_nxv8f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v10, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m4, ta, ma
@@ -735,6 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.roundtozero.nxv16f16(<vscale x 16 x half>,
define <vscale x 16 x half> @vp_roundtozero_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv16f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,8 +756,8 @@ define <vscale x 16 x half> @vp_roundtozero_nxv16f16(<vscale x 16 x half> %va, <
;
; ZVFHMIN-LABEL: vp_roundtozero_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv1r.v v12, v0
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: lui a1, 307200
; ZVFHMIN-NEXT: vsetvli zero, a0, e32, m8, ta, ma
@@ -821,6 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.roundtozero.nxv32f16(<vscale x 32 x half>,
define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv32f16:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -846,9 +849,9 @@ define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
; ZVFHMIN-NEXT: lui a3, 307200
; ZVFHMIN-NEXT: slli a1, a2, 1
@@ -1068,6 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.roundtozero.nxv4f32(<vscale x 4 x float>,
define <vscale x 4 x float> @vp_roundtozero_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1112,6 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.roundtozero.nxv8f32(<vscale x 8 x float>,
define <vscale x 8 x float> @vp_roundtozero_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1156,6 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.roundtozero.nxv16f32(<vscale x 16 x float
define <vscale x 16 x float> @vp_roundtozero_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv16f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1242,6 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.roundtozero.nxv2f64(<vscale x 2 x double>
define <vscale x 2 x double> @vp_roundtozero_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1286,6 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.roundtozero.nxv4f64(<vscale x 4 x double>
define <vscale x 4 x double> @vp_roundtozero_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1330,6 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.roundtozero.nxv7f64(<vscale x 7 x double>
define <vscale x 7 x double> @vp_roundtozero_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv7f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1374,6 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.roundtozero.nxv8f64(<vscale x 8 x double>
define <vscale x 8 x double> @vp_roundtozero_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
@@ -1425,13 +1435,13 @@ define <vscale x 16 x double> @vp_roundtozero_nxv16f64(<vscale x 16 x double> %v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI44_0)
; CHECK-NEXT: srli a3, a1, 3
; CHECK-NEXT: fld fa5, %lo(.LCPI44_0)(a2)
; CHECK-NEXT: sub a2, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v6, v0, a3
; CHECK-NEXT: sltu a3, a0, a2
; CHECK-NEXT: addi a3, a3, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
index aef160049106b9..30180e4c41218e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
@@ -17,6 +17,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: slli a1, a1, 1
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sw a0, 8(sp) # 4-byte Folded Spill
+; SPILL-O0-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; SPILL-O0-NEXT: vmv1r.v v10, v9
; SPILL-O0-NEXT: vmv1r.v v9, v8
; SPILL-O0-NEXT: csrr a1, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
index b34952b64f09e2..adb15f02e33a45 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
@@ -15,6 +15,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -90,6 +91,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -166,6 +168,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
@@ -246,6 +249,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m4_v12m4
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
; SPILL-O0-NEXT: vmv4r.v v8, v12
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
@@ -326,6 +330,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2_v12m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg3e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
index c7c44fb0e12158..a7a134db93b0eb 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
@@ -20,6 +20,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: slli a1, a1, 1
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sd a0, 16(sp) # 8-byte Folded Spill
+; SPILL-O0-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; SPILL-O0-NEXT: vmv1r.v v10, v9
; SPILL-O0-NEXT: vmv1r.v v9, v8
; SPILL-O0-NEXT: csrr a1, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
index 361adb55ef12f1..ff0f1d7748668d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
@@ -15,6 +15,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -90,6 +91,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -166,6 +168,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
@@ -246,6 +249,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m4_v12m4
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
; SPILL-O0-NEXT: vmv4r.v v8, v12
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
@@ -326,6 +330,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2_v12m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg3e32.v v8, (a0)
+; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
index b27ba14e85c839..2e3f7c9bc2a6ba 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
@@ -47,6 +47,7 @@ define <vscale x 16 x i32> @foo(i32 %0, i32 %1, i32 %2, i32 %3, i32 %4, i32 %5,
; CHECK-NEXT: vs8r.v v8, (t1)
; CHECK-NEXT: sd t1, 0(sp)
; CHECK-NEXT: sd t0, 8(sp)
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: call bar
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
index 23ebfade6f6b0f..ebafaed09929a5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
@@ -941,6 +941,7 @@ declare <vscale x 2 x i32> @llvm.riscv.vredsum.nxv2i32.nxv2i32(
define <vscale x 2 x i32> @vredsum(<vscale x 2 x i32> %passthru, <vscale x 2 x i32> %x, <vscale x 2 x i32> %y, <vscale x 2 x i1> %m, i64 %vl) {
; CHECK-LABEL: vredsum:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
; CHECK-NEXT: vredsum.vs v11, v9, v10
@@ -965,6 +966,7 @@ define <vscale x 2 x float> @vfredusum(<vscale x 2 x float> %passthru, <vscale x
; CHECK-LABEL: vfredusum:
; CHECK: # %bb.0:
; CHECK-NEXT: fsrmi a1, 0
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
; CHECK-NEXT: vfredusum.vs v11, v9, v10
@@ -1016,8 +1018,8 @@ define <vscale x 2 x float> @vfredusum_allones_mask(<vscale x 2 x float> %passth
define <vscale x 2 x i32> @unfoldable_vredsum_allones_mask_diff_vl(<vscale x 2 x i32> %passthru, <vscale x 2 x i32> %x, <vscale x 2 x i32> %y) {
; CHECK-LABEL: unfoldable_vredsum_allones_mask_diff_vl:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli a0, zero, e32, m1, tu, ma
+; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vredsum.vs v11, v9, v10
; CHECK-NEXT: vsetivli zero, 1, e32, m1, tu, ma
; CHECK-NEXT: vmv.v.v v8, v11
diff --git a/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
index 6c11e9413525e0..70b53841bff4c2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
@@ -1473,6 +1473,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64bf16(<vscale x 64 x bfloat> %va, <vs
; CHECK-NEXT: add a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: mv a3, a1
@@ -1504,7 +1505,6 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64bf16(<vscale x 64 x bfloat> %va, <vs
; CHECK-NEXT: add t0, sp, t0
; CHECK-NEXT: addi t0, t0, 16
; CHECK-NEXT: vs1r.v v24, (t0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli t0, zero, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vx v25, v24, a1
; CHECK-NEXT: vsetvli t0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v24, v25, a3
@@ -3720,6 +3720,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFH-NEXT: slli a1, a1, 4
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; ZVFH-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v24, v0
; ZVFH-NEXT: csrr a1, vlenb
; ZVFH-NEXT: slli a1, a1, 3
@@ -3738,7 +3739,6 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFH-NEXT: vl8re16.v v0, (a0)
; ZVFH-NEXT: addi a0, sp, 16
; ZVFH-NEXT: vs8r.v v0, (a0) # Unknown-size Folded Spill
-; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: vslidedown.vx v0, v24, a1
; ZVFH-NEXT: and a4, a4, a5
; ZVFH-NEXT: vsetvli zero, a4, e16, m8, ta, ma
@@ -3781,6 +3781,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFHMIN-NEXT: add a1, a1, a3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v24, v0
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: mv a3, a1
@@ -3812,7 +3813,6 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFHMIN-NEXT: add t0, sp, t0
; ZVFHMIN-NEXT: addi t0, t0, 16
; ZVFHMIN-NEXT: vs1r.v v24, (t0) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli t0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v25, v24, a1
; ZVFHMIN-NEXT: vsetvli t0, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v24, v25, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
index e8099c2d08a5f8..2392cb77c753dc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
@@ -1092,6 +1092,7 @@ define <vscale x 128 x i1> @icmp_eq_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -1099,7 +1100,6 @@ define <vscale x 128 x i1> @icmp_eq_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: vsetvli a4, zero, e8, m8, ta, ma
; CHECK-NEXT: vlm.v v0, (a2)
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: add a2, a0, a1
@@ -1143,8 +1143,8 @@ define <vscale x 128 x i1> @icmp_eq_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
define <vscale x 128 x i1> @icmp_eq_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -1173,8 +1173,8 @@ define <vscale x 128 x i1> @icmp_eq_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b,
define <vscale x 128 x i1> @icmp_eq_vx_swap_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_swap_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -2244,6 +2244,7 @@ define <vscale x 32 x i1> @icmp_eq_vv_nxv32i32(<vscale x 32 x i32> %va, <vscale
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -2262,7 +2263,6 @@ define <vscale x 32 x i1> @icmp_eq_vv_nxv32i32(<vscale x 32 x i32> %va, <vscale
; CHECK-NEXT: vl8re32.v v0, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v0, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v24, a1
; CHECK-NEXT: and a4, a4, a5
; CHECK-NEXT: vsetvli zero, a4, e32, m8, ta, ma
@@ -2299,11 +2299,11 @@ define <vscale x 32 x i1> @icmp_eq_vv_nxv32i32(<vscale x 32 x i32> %va, <vscale
define <vscale x 32 x i1> @icmp_eq_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a3, vlenb
; CHECK-NEXT: srli a2, a3, 2
; CHECK-NEXT: slli a3, a3, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a4, a1, a3
; CHECK-NEXT: sltu a5, a1, a4
@@ -2332,11 +2332,11 @@ define <vscale x 32 x i1> @icmp_eq_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b,
define <vscale x 32 x i1> @icmp_eq_vx_swap_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_swap_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a3, vlenb
; CHECK-NEXT: srli a2, a3, 2
; CHECK-NEXT: slli a3, a3, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a4, a1, a3
; CHECK-NEXT: sltu a5, a1, a4
diff --git a/llvm/test/CodeGen/RISCV/rvv/sink-splat-operands.ll b/llvm/test/CodeGen/RISCV/rvv/sink-splat-operands.ll
index 197ba085c03599..75a5eea1cb409e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/sink-splat-operands.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/sink-splat-operands.ll
@@ -4865,10 +4865,10 @@ declare <4 x i1> @llvm.vp.icmp.v4i32(<4 x i32>, <4 x i32>, metadata, <4 x i1>, i
define void @sink_splat_vp_icmp(ptr nocapture %x, i32 signext %y, <4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: sink_splat_vp_icmp:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: lui a3, 1
; CHECK-NEXT: add a3, a0, a3
-; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; CHECK-NEXT: vmv.v.i v9, 0
; CHECK-NEXT: .LBB102_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
@@ -4906,10 +4906,10 @@ declare <4 x i1> @llvm.vp.fcmp.v4f32(<4 x float>, <4 x float>, metadata, <4 x i1
define void @sink_splat_vp_fcmp(ptr nocapture %x, float %y, <4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: sink_splat_vp_fcmp:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: lui a2, 1
; CHECK-NEXT: add a2, a0, a2
-; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; CHECK-NEXT: vmv.v.i v9, 0
; CHECK-NEXT: .LBB103_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
index f8315de324e42b..e1cd2dade3124f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
@@ -663,6 +663,7 @@ declare <vscale x 3 x double> @llvm.experimental.vp.strided.load.nxv3f64.p0.i32(
define <vscale x 16 x double> @strided_load_nxv16f64(ptr %ptr, i64 %stride, <vscale x 16 x i1> %mask, i32 zeroext %evl) {
; CHECK-RV32-LABEL: strided_load_nxv16f64:
; CHECK-RV32: # %bb.0:
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v9, v0
; CHECK-RV32-NEXT: csrr a4, vlenb
; CHECK-RV32-NEXT: sub a2, a3, a4
@@ -688,6 +689,7 @@ define <vscale x 16 x double> @strided_load_nxv16f64(ptr %ptr, i64 %stride, <vsc
;
; CHECK-RV64-LABEL: strided_load_nxv16f64:
; CHECK-RV64: # %bb.0:
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v9, v0
; CHECK-RV64-NEXT: csrr a4, vlenb
; CHECK-RV64-NEXT: sub a3, a2, a4
@@ -765,6 +767,7 @@ declare <vscale x 16 x double> @llvm.experimental.vp.strided.load.nxv16f64.p0.i6
define <vscale x 16 x double> @strided_load_nxv17f64(ptr %ptr, i64 %stride, <vscale x 17 x i1> %mask, i32 zeroext %evl, ptr %hi_ptr) {
; CHECK-RV32-LABEL: strided_load_nxv17f64:
; CHECK-RV32: # %bb.0:
+; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v8, v0
; CHECK-RV32-NEXT: csrr a2, vlenb
; CHECK-RV32-NEXT: slli a7, a2, 1
@@ -812,6 +815,7 @@ define <vscale x 16 x double> @strided_load_nxv17f64(ptr %ptr, i64 %stride, <vsc
;
; CHECK-RV64-LABEL: strided_load_nxv17f64:
; CHECK-RV64: # %bb.0:
+; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v8, v0
; CHECK-RV64-NEXT: csrr a4, vlenb
; CHECK-RV64-NEXT: slli a7, a4, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
index 98ec99bcfea33e..c523a6c748ce1f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
@@ -615,6 +615,7 @@ define void @strided_store_nxv17f64(<vscale x 17 x double> %v, ptr %ptr, i32 sig
; CHECK-NEXT: slli a4, a4, 3
; CHECK-NEXT: sub sp, sp, a4
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: addi a4, sp, 16
; CHECK-NEXT: vs8r.v v16, (a4) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/undef-earlyclobber-chain.ll b/llvm/test/CodeGen/RISCV/rvv/undef-earlyclobber-chain.ll
index ab13c78da05e87..c9f9a79733003e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/undef-earlyclobber-chain.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/undef-earlyclobber-chain.ll
@@ -158,8 +158,8 @@ declare <vscale x 8 x i8> @llvm.riscv.vrgatherei16.vv.nxv8i8.i64(<vscale x 8 x i
define void @repeat_shuffle(<2 x double> %v, ptr noalias %q) {
; CHECK-LABEL: repeat_shuffle:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv2r.v v10, v8
; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; CHECK-NEXT: vmv2r.v v10, v8
; CHECK-NEXT: vslideup.vi v10, v8, 2
; CHECK-NEXT: vse64.v v10, (a0)
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vadd-vp.ll
index ebd550013ec78f..fee6799e992f31 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vadd-vp.ll
@@ -565,8 +565,8 @@ declare <vscale x 128 x i8> @llvm.vp.add.nxv128i8(<vscale x 128 x i8>, <vscale x
define <vscale x 128 x i8> @vadd_vi_nxv128i8(<vscale x 128 x i8> %va, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vadd_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -1343,11 +1343,11 @@ declare <vscale x 32 x i32> @llvm.vp.add.nxv32i32(<vscale x 32 x i32>, <vscale x
define <vscale x 32 x i32> @vadd_vi_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vadd_vi_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -1399,11 +1399,11 @@ declare i32 @llvm.vscale.i32()
define <vscale x 32 x i32> @vadd_vi_nxv32i32_evl_nx8(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m) {
; CHECK-LABEL: vadd_vi_nxv32i32_evl_nx8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: srli a2, a0, 2
; CHECK-NEXT: slli a1, a0, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
index e59a9174b03d94..dbe0780ce9bb89 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
@@ -43,6 +43,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vcpop_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -97,6 +98,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vcpop_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -137,6 +139,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vcpop_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -177,6 +180,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vcpop_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -217,6 +221,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vcpop_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -257,6 +262,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vcpop_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -297,6 +303,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vcpop_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-fixed.ll b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-fixed.ll
index 6de846b2582dac..2fc5b40a89afad 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-fixed.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-fixed.ll
@@ -7,8 +7,8 @@
define {<16 x i1>, <16 x i1>} @vector_deinterleave_v16i1_v32i1(<32 x i1> %vec) {
; CHECK-LABEL: vector_deinterleave_v16i1_v32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: vsetivli zero, 2, e8, mf4, ta, ma
+; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: vslidedown.vi v0, v0, 2
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
index 99743066c79a82..2291475cef0140 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
@@ -133,8 +133,8 @@ ret {<vscale x 64 x i1>, <vscale x 64 x i1>} %retval
define {<vscale x 64 x i8>, <vscale x 64 x i8>} @vector_deinterleave_nxv64i8_nxv128i8(<vscale x 128 x i8> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv64i8_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vnsrl.wi v8, v24, 0
; CHECK-NEXT: vnsrl.wi v0, v24, 8
; CHECK-NEXT: vnsrl.wi v12, v16, 0
@@ -148,8 +148,8 @@ ret {<vscale x 64 x i8>, <vscale x 64 x i8>} %retval
define {<vscale x 32 x i16>, <vscale x 32 x i16>} @vector_deinterleave_nxv32i16_nxv64i16(<vscale x 64 x i16> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv32i16_nxv64i16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vnsrl.wi v8, v24, 0
; CHECK-NEXT: vnsrl.wi v0, v24, 16
; CHECK-NEXT: vnsrl.wi v12, v16, 0
@@ -163,9 +163,9 @@ ret {<vscale x 32 x i16>, <vscale x 32 x i16>} %retval
define {<vscale x 16 x i32>, <vscale x 16 x i32>} @vector_deinterleave_nxv16i32_nxvv32i32(<vscale x 32 x i32> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv16i32_nxvv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m4, ta, ma
; CHECK-NEXT: vmv8r.v v24, v16
; CHECK-NEXT: li a0, 32
-; CHECK-NEXT: vsetvli a1, zero, e32, m4, ta, ma
; CHECK-NEXT: vnsrl.wx v20, v24, a0
; CHECK-NEXT: vnsrl.wx v16, v8, a0
; CHECK-NEXT: vnsrl.wi v0, v8, 0
@@ -374,8 +374,8 @@ declare {<vscale x 2 x double>, <vscale x 2 x double>} @llvm.vector.deinterleave
define {<vscale x 32 x bfloat>, <vscale x 32 x bfloat>} @vector_deinterleave_nxv32bf16_nxv64bf16(<vscale x 64 x bfloat> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv32bf16_nxv64bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vnsrl.wi v8, v24, 0
; CHECK-NEXT: vnsrl.wi v0, v24, 16
; CHECK-NEXT: vnsrl.wi v12, v16, 0
@@ -389,8 +389,8 @@ ret {<vscale x 32 x bfloat>, <vscale x 32 x bfloat>} %retval
define {<vscale x 32 x half>, <vscale x 32 x half>} @vector_deinterleave_nxv32f16_nxv64f16(<vscale x 64 x half> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv32f16_nxv64f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vnsrl.wi v8, v24, 0
; CHECK-NEXT: vnsrl.wi v0, v24, 16
; CHECK-NEXT: vnsrl.wi v12, v16, 0
@@ -404,9 +404,9 @@ ret {<vscale x 32 x half>, <vscale x 32 x half>} %retval
define {<vscale x 16 x float>, <vscale x 16 x float>} @vector_deinterleave_nxv16f32_nxv32f32(<vscale x 32 x float> %vec) {
; CHECK-LABEL: vector_deinterleave_nxv16f32_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e32, m4, ta, ma
; CHECK-NEXT: vmv8r.v v24, v16
; CHECK-NEXT: li a0, 32
-; CHECK-NEXT: vsetvli a1, zero, e32, m4, ta, ma
; CHECK-NEXT: vnsrl.wx v20, v24, a0
; CHECK-NEXT: vnsrl.wx v16, v8, a0
; CHECK-NEXT: vnsrl.wi v0, v8, 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-fixed.ll b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-fixed.ll
index 7b0ac01918b9bd..08aa02c7e869a1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-fixed.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-fixed.ll
@@ -91,10 +91,10 @@ define <8 x i32> @vector_interleave_v8i32_v4i32(<4 x i32> %a, <4 x i32> %b) {
define <4 x i64> @vector_interleave_v4i64_v2i64(<2 x i64> %a, <2 x i64> %b) {
; CHECK-LABEL: vector_interleave_v4i64_v2i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: lui a0, 12304
; CHECK-NEXT: addi a0, a0, 512
-; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vslideup.vi v8, v10, 2
; CHECK-NEXT: vmv.s.x v10, a0
; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
@@ -106,10 +106,10 @@ define <4 x i64> @vector_interleave_v4i64_v2i64(<2 x i64> %a, <2 x i64> %b) {
;
; ZVBB-LABEL: vector_interleave_v4i64_v2i64:
; ZVBB: # %bb.0:
+; ZVBB-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; ZVBB-NEXT: vmv1r.v v10, v9
; ZVBB-NEXT: lui a0, 12304
; ZVBB-NEXT: addi a0, a0, 512
-; ZVBB-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; ZVBB-NEXT: vslideup.vi v8, v10, 2
; ZVBB-NEXT: vmv.s.x v10, a0
; ZVBB-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
@@ -239,10 +239,10 @@ define <8 x float> @vector_interleave_v8f32_v4f32(<4 x float> %a, <4 x float> %b
define <4 x double> @vector_interleave_v4f64_v2f64(<2 x double> %a, <2 x double> %b) {
; CHECK-LABEL: vector_interleave_v4f64_v2f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: lui a0, 12304
; CHECK-NEXT: addi a0, a0, 512
-; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vslideup.vi v8, v10, 2
; CHECK-NEXT: vmv.s.x v10, a0
; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
@@ -254,10 +254,10 @@ define <4 x double> @vector_interleave_v4f64_v2f64(<2 x double> %a, <2 x double>
;
; ZVBB-LABEL: vector_interleave_v4f64_v2f64:
; ZVBB: # %bb.0:
+; ZVBB-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; ZVBB-NEXT: vmv1r.v v10, v9
; ZVBB-NEXT: lui a0, 12304
; ZVBB-NEXT: addi a0, a0, 512
-; ZVBB-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; ZVBB-NEXT: vslideup.vi v8, v10, 2
; ZVBB-NEXT: vmv.s.x v10, a0
; ZVBB-NEXT: vsetvli zero, zero, e16, mf2, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
index bc203e215d8786..9b78f31d399d9a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
@@ -9,9 +9,9 @@
define void @vector_interleave_store_nxv32i1_nxv16i1(<vscale x 16 x i1> %a, <vscale x 16 x i1> %b, ptr %p) {
; CHECK-LABEL: vector_interleave_store_nxv32i1_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: li a1, -1
; CHECK-NEXT: csrr a2, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
index 26e9afcb1d109b..864acb320d8fe1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
@@ -11,9 +11,9 @@
define <vscale x 32 x i1> @vector_interleave_nxv32i1_nxv16i1(<vscale x 16 x i1> %a, <vscale x 16 x i1> %b) {
; CHECK-LABEL: vector_interleave_nxv32i1_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: csrr a1, vlenb
@@ -32,9 +32,9 @@ define <vscale x 32 x i1> @vector_interleave_nxv32i1_nxv16i1(<vscale x 16 x i1>
;
; ZVBB-LABEL: vector_interleave_nxv32i1_nxv16i1:
; ZVBB: # %bb.0:
+; ZVBB-NEXT: vsetvli a0, zero, e8, m2, ta, mu
; ZVBB-NEXT: vmv1r.v v9, v0
; ZVBB-NEXT: vmv1r.v v0, v8
-; ZVBB-NEXT: vsetvli a0, zero, e8, m2, ta, mu
; ZVBB-NEXT: vmv.v.i v10, 0
; ZVBB-NEXT: li a0, 1
; ZVBB-NEXT: csrr a1, vlenb
@@ -160,9 +160,9 @@ declare <vscale x 4 x i64> @llvm.vector.interleave2.nxv4i64(<vscale x 2 x i64>,
define <vscale x 128 x i1> @vector_interleave_nxv128i1_nxv64i1(<vscale x 64 x i1> %a, <vscale x 64 x i1> %b) {
; CHECK-LABEL: vector_interleave_nxv128i1_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v24, 0
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vmerge.vim v16, v24, 1, v0
@@ -203,8 +203,8 @@ define <vscale x 128 x i1> @vector_interleave_nxv128i1_nxv64i1(<vscale x 64 x i1
define <vscale x 128 x i8> @vector_interleave_nxv128i8_nxv64i8(<vscale x 64 x i8> %a, <vscale x 64 x i8> %b) {
; CHECK-LABEL: vector_interleave_nxv128i8_nxv64i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -215,8 +215,8 @@ define <vscale x 128 x i8> @vector_interleave_nxv128i8_nxv64i8(<vscale x 64 x i8
;
; ZVBB-LABEL: vector_interleave_nxv128i8_nxv64i8:
; ZVBB: # %bb.0:
-; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vwsll.vi v8, v16, 8
; ZVBB-NEXT: vwsll.vi v0, v20, 8
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
@@ -230,8 +230,8 @@ define <vscale x 128 x i8> @vector_interleave_nxv128i8_nxv64i8(<vscale x 64 x i8
define <vscale x 64 x i16> @vector_interleave_nxv64i16_nxv32i16(<vscale x 32 x i16> %a, <vscale x 32 x i16> %b) {
; CHECK-LABEL: vector_interleave_nxv64i16_nxv32i16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -242,8 +242,8 @@ define <vscale x 64 x i16> @vector_interleave_nxv64i16_nxv32i16(<vscale x 32 x i
;
; ZVBB-LABEL: vector_interleave_nxv64i16_nxv32i16:
; ZVBB: # %bb.0:
-; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vwsll.vi v8, v16, 16
; ZVBB-NEXT: vwsll.vi v0, v20, 16
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
@@ -257,8 +257,8 @@ define <vscale x 64 x i16> @vector_interleave_nxv64i16_nxv32i16(<vscale x 32 x i
define <vscale x 32 x i32> @vector_interleave_nxv32i32_nxv16i32(<vscale x 16 x i32> %a, <vscale x 16 x i32> %b) {
; CHECK-LABEL: vector_interleave_nxv32i32_nxv16i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e32, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -269,9 +269,9 @@ define <vscale x 32 x i32> @vector_interleave_nxv32i32_nxv16i32(<vscale x 16 x i
;
; ZVBB-LABEL: vector_interleave_nxv32i32_nxv16i32:
; ZVBB: # %bb.0:
+; ZVBB-NEXT: vsetvli a0, zero, e32, m4, ta, ma
; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: li a0, 32
-; ZVBB-NEXT: vsetvli a1, zero, e32, m4, ta, ma
; ZVBB-NEXT: vwsll.vx v8, v16, a0
; ZVBB-NEXT: vwsll.vx v0, v20, a0
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
@@ -575,8 +575,8 @@ declare <vscale x 4 x double> @llvm.vector.interleave2.nxv4f64(<vscale x 2 x dou
define <vscale x 64 x bfloat> @vector_interleave_nxv64bf16_nxv32bf16(<vscale x 32 x bfloat> %a, <vscale x 32 x bfloat> %b) {
; CHECK-LABEL: vector_interleave_nxv64bf16_nxv32bf16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -587,8 +587,8 @@ define <vscale x 64 x bfloat> @vector_interleave_nxv64bf16_nxv32bf16(<vscale x 3
;
; ZVBB-LABEL: vector_interleave_nxv64bf16_nxv32bf16:
; ZVBB: # %bb.0:
-; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vwsll.vi v8, v16, 16
; ZVBB-NEXT: vwsll.vi v0, v20, 16
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
@@ -602,8 +602,8 @@ define <vscale x 64 x bfloat> @vector_interleave_nxv64bf16_nxv32bf16(<vscale x 3
define <vscale x 64 x half> @vector_interleave_nxv64f16_nxv32f16(<vscale x 32 x half> %a, <vscale x 32 x half> %b) {
; CHECK-LABEL: vector_interleave_nxv64f16_nxv32f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -614,8 +614,8 @@ define <vscale x 64 x half> @vector_interleave_nxv64f16_nxv32f16(<vscale x 32 x
;
; ZVBB-LABEL: vector_interleave_nxv64f16_nxv32f16:
; ZVBB: # %bb.0:
-; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vsetvli a0, zero, e16, m4, ta, ma
+; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: vwsll.vi v8, v16, 16
; ZVBB-NEXT: vwsll.vi v0, v20, 16
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
@@ -629,8 +629,8 @@ define <vscale x 64 x half> @vector_interleave_nxv64f16_nxv32f16(<vscale x 32 x
define <vscale x 32 x float> @vector_interleave_nxv32f32_nxv16f32(<vscale x 16 x float> %a, <vscale x 16 x float> %b) {
; CHECK-LABEL: vector_interleave_nxv32f32_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vsetvli a0, zero, e32, m4, ta, ma
+; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: vwaddu.vv v8, v24, v16
; CHECK-NEXT: li a0, -1
; CHECK-NEXT: vwaddu.vv v0, v28, v20
@@ -641,9 +641,9 @@ define <vscale x 32 x float> @vector_interleave_nxv32f32_nxv16f32(<vscale x 16 x
;
; ZVBB-LABEL: vector_interleave_nxv32f32_nxv16f32:
; ZVBB: # %bb.0:
+; ZVBB-NEXT: vsetvli a0, zero, e32, m4, ta, ma
; ZVBB-NEXT: vmv8r.v v24, v8
; ZVBB-NEXT: li a0, 32
-; ZVBB-NEXT: vsetvli a1, zero, e32, m4, ta, ma
; ZVBB-NEXT: vwsll.vx v8, v16, a0
; ZVBB-NEXT: vwsll.vx v0, v20, a0
; ZVBB-NEXT: vwaddu.wv v8, v8, v24
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
index 79bd60d1702f32..81ab670c3c283e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
@@ -120,6 +120,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
; CHECK-NEXT: vadd.vv v10, v8, v9
@@ -152,6 +153,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru_negative:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
; CHECK-NEXT: vadd.vv v10, v8, v9
@@ -183,6 +185,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m) nounwind {
; CHECK-LABEL: vadd_vv_mask:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vadd.vv v10, v8, v9, v0.t
@@ -218,6 +221,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m, <vscale x 1 x i1> %m2) nounwind {
; CHECK-LABEL: vadd_vv_mask_negative:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vadd.vv v11, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-splice.ll b/llvm/test/CodeGen/RISCV/rvv/vector-splice.ll
index 6a72043ca7e8e6..90d798b167cfc5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-splice.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-splice.ll
@@ -11,9 +11,9 @@ declare <vscale x 1 x i1> @llvm.vector.splice.nxv1i1(<vscale x 1 x i1>, <vscale
define <vscale x 1 x i1> @splice_nxv1i1_offset_negone(<vscale x 1 x i1> %a, <vscale x 1 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv1i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -33,9 +33,9 @@ define <vscale x 1 x i1> @splice_nxv1i1_offset_negone(<vscale x 1 x i1> %a, <vsc
define <vscale x 1 x i1> @splice_nxv1i1_offset_max(<vscale x 1 x i1> %a, <vscale x 1 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv1i1_offset_max:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -59,9 +59,9 @@ declare <vscale x 2 x i1> @llvm.vector.splice.nxv2i1(<vscale x 2 x i1>, <vscale
define <vscale x 2 x i1> @splice_nxv2i1_offset_negone(<vscale x 2 x i1> %a, <vscale x 2 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv2i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -81,9 +81,9 @@ define <vscale x 2 x i1> @splice_nxv2i1_offset_negone(<vscale x 2 x i1> %a, <vsc
define <vscale x 2 x i1> @splice_nxv2i1_offset_max(<vscale x 2 x i1> %a, <vscale x 2 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv2i1_offset_max:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -107,9 +107,9 @@ declare <vscale x 4 x i1> @llvm.vector.splice.nxv4i1(<vscale x 4 x i1>, <vscale
define <vscale x 4 x i1> @splice_nxv4i1_offset_negone(<vscale x 4 x i1> %a, <vscale x 4 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv4i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -129,9 +129,9 @@ define <vscale x 4 x i1> @splice_nxv4i1_offset_negone(<vscale x 4 x i1> %a, <vsc
define <vscale x 4 x i1> @splice_nxv4i1_offset_max(<vscale x 4 x i1> %a, <vscale x 4 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv4i1_offset_max:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -155,9 +155,9 @@ declare <vscale x 8 x i1> @llvm.vector.splice.nxv8i1(<vscale x 8 x i1>, <vscale
define <vscale x 8 x i1> @splice_nxv8i1_offset_negone(<vscale x 8 x i1> %a, <vscale x 8 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv8i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -176,9 +176,9 @@ define <vscale x 8 x i1> @splice_nxv8i1_offset_negone(<vscale x 8 x i1> %a, <vsc
define <vscale x 8 x i1> @splice_nxv8i1_offset_max(<vscale x 8 x i1> %a, <vscale x 8 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv8i1_offset_max:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v10, v8, 1, v0
@@ -201,9 +201,9 @@ declare <vscale x 16 x i1> @llvm.vector.splice.nxv16i1(<vscale x 16 x i1>, <vsca
define <vscale x 16 x i1> @splice_nxv16i1_offset_negone(<vscale x 16 x i1> %a, <vscale x 16 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv16i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v12, v10, 1, v0
@@ -223,9 +223,9 @@ define <vscale x 16 x i1> @splice_nxv16i1_offset_negone(<vscale x 16 x i1> %a, <
define <vscale x 16 x i1> @splice_nxv16i1_offset_max(<vscale x 16 x i1> %a, <vscale x 16 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv16i1_offset_max:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v12, v10, 1, v0
@@ -249,9 +249,9 @@ declare <vscale x 32 x i1> @llvm.vector.splice.nxv32i1(<vscale x 32 x i1>, <vsca
define <vscale x 32 x i1> @splice_nxv32i1_offset_negone(<vscale x 32 x i1> %a, <vscale x 32 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv32i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: vmv.v.i v12, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v16, v12, 1, v0
@@ -296,9 +296,9 @@ declare <vscale x 64 x i1> @llvm.vector.splice.nxv64i1(<vscale x 64 x i1>, <vsca
define <vscale x 64 x i1> @splice_nxv64i1_offset_negone(<vscale x 64 x i1> %a, <vscale x 64 x i1> %b) #0 {
; CHECK-LABEL: splice_nxv64i1_offset_negone:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v24, 0
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: vmerge.vim v16, v24, 1, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfabs-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfabs-vp.ll
index 2c92a5da8eecb7..8f9f9c4256c8f1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfabs-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfabs-vp.ll
@@ -462,11 +462,11 @@ declare <vscale x 16 x double> @llvm.vp.fabs.nxv16f64(<vscale x 16 x double>, <v
define <vscale x 16 x double> @vfabs_vv_nxv16f64(<vscale x 16 x double> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfabs_vv_nxv16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
index 1953cfd2a0169f..87bc9f27d6dc96 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
@@ -411,11 +411,11 @@ define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
@@ -518,12 +518,12 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v20
; CHECK-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; CHECK-NEXT: vmv.v.x v16, a1
@@ -604,10 +604,10 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; CHECK-NEXT: vmset.m v7
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
@@ -1205,11 +1205,11 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
@@ -1324,12 +1324,12 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v20
; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
@@ -1416,10 +1416,10 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v7
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
index ccd286b7ee5fd3..061af454aa8bab 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
@@ -373,11 +373,11 @@ define <vscale x 32 x bfloat> @vfdiv_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
@@ -480,12 +480,12 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v20
; CHECK-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; CHECK-NEXT: vmv.v.x v16, a1
@@ -566,10 +566,10 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; CHECK-NEXT: vmset.m v7
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
@@ -1117,11 +1117,11 @@ define <vscale x 32 x half> @vfdiv_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
@@ -1236,12 +1236,12 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v20
; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
@@ -1328,10 +1328,10 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v7
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
index eafd605c6110eb..906d0642143cb9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
@@ -43,6 +43,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vfirst_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -97,6 +98,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vfirst_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -137,6 +139,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vfirst_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -177,6 +180,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vfirst_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -217,6 +221,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vfirst_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -257,6 +262,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vfirst_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -297,6 +303,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vfirst_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
index fd518d9be786de..c969e34411fc90 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
@@ -628,6 +628,7 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK-NEXT: add a2, a2, a3
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vl8re16.v v0, (a0)
; CHECK-NEXT: csrr a2, vlenb
@@ -639,7 +640,6 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK-NEXT: add a4, sp, a4
; CHECK-NEXT: addi a4, a4, 16
; CHECK-NEXT: vs1r.v v24, (a4) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v24, v24, a2
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: slli a2, a2, 3
@@ -2193,6 +2193,7 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: add a2, a2, a3
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v24, v0
; ZVFHMIN-NEXT: vl8re16.v v0, (a0)
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -2204,7 +2205,6 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: add a4, sp, a4
; ZVFHMIN-NEXT: addi a4, a4, 16
; ZVFHMIN-NEXT: vs1r.v v24, (a4) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v24, v24, a2
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: slli a2, a2, 3
@@ -3662,6 +3662,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: add a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x30, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 48 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -3708,7 +3709,6 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: vl8re64.v v8, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: and a0, a7, a6
; CHECK-NEXT: csrr a2, vlenb
@@ -7796,6 +7796,7 @@ define <vscale x 16 x half> @vfnmadd_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmadd_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
@@ -8152,9 +8153,9 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_neg_splat_commute(<vscale x 16
;
; ZVFHMIN-LABEL: vfnmadd_vf_nxv16f16_neg_splat_commute:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
@@ -8253,6 +8254,7 @@ define <vscale x 16 x half> @vfnmsub_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmsub_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
@@ -8553,9 +8555,9 @@ define <vscale x 16 x half> @vfnmsub_vf_nxv16f16_neg_splat(<vscale x 16 x half>
;
; ZVFHMIN-LABEL: vfnmsub_vf_nxv16f16_neg_splat:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v12
@@ -8712,6 +8714,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: add a2, sp, a2
; ZVFHMIN-NEXT: addi a2, a2, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
+; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v8
; ZVFHMIN-NEXT: vl8re16.v v8, (a0)
; ZVFHMIN-NEXT: lui a2, 8
@@ -9277,10 +9280,10 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: add a1, a1, a2
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x28, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 40 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: fmv.x.h a2, fa0
; ZVFHMIN-NEXT: lui a1, 8
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v7
; ZVFHMIN-NEXT: csrr a3, vlenb
; ZVFHMIN-NEXT: csrr a4, vlenb
@@ -10001,11 +10004,11 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN-NEXT: add a2, a2, a3
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: vl8re16.v v24, (a0)
; ZVFHMIN-NEXT: lui a2, 8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v8
; ZVFHMIN-NEXT: csrr a3, vlenb
; ZVFHMIN-NEXT: vsetvli zero, a1, e16, m8, ta, ma
@@ -11154,11 +11157,11 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_unmasked_commute(<vsc
; ZVFHMIN-NEXT: add a1, a1, a2
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: lui a2, 8
-; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmset.m v8
; ZVFHMIN-NEXT: csrr a3, vlenb
; ZVFHMIN-NEXT: vmv.v.x v24, a1
@@ -11766,11 +11769,11 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN-NEXT: add a2, a2, a3
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x29, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 41 * vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: vl8re16.v v24, (a0)
; ZVFHMIN-NEXT: lui a2, 8
-; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v8
; ZVFHMIN-NEXT: csrr a3, vlenb
; ZVFHMIN-NEXT: vsetvli zero, a1, e16, m8, ta, ma
@@ -11924,11 +11927,11 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, half
; ZVFHMIN-NEXT: add a1, a1, a2
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x28, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 40 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: fmv.x.h a2, fa0
; ZVFHMIN-NEXT: lui a3, 8
; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: vsetvli a4, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a2
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: slli a2, a2, 5
@@ -12075,11 +12078,11 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_commute(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: fmv.x.h a2, fa0
; ZVFHMIN-NEXT: lui a3, 8
; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: vsetvli a4, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a2
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: slli a2, a2, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
index 1d471ab2404b17..a4f3a7d3a09a87 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
@@ -227,6 +227,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv8r.v v0, v16
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -237,7 +238,6 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v16
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: li a1, 24
@@ -314,6 +314,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 24 * vlenb
+; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv8r.v v24, v16
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -321,7 +322,6 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; CHECK-NEXT: fmv.x.h a0, fa0
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -664,6 +664,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -674,7 +675,6 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: add a0, sp, a0
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: li a1, 24
@@ -757,6 +757,7 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 24 * vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 3
@@ -764,7 +765,6 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
index 88fd81a5a2f7bc..109023032e6ab5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
@@ -226,6 +226,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH-NEXT: slli a1, a1, 5
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFH-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFH-NEXT: vmv8r.v v0, v16
; ZVFH-NEXT: addi a1, sp, 16
; ZVFH-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -236,7 +237,6 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH-NEXT: add a0, sp, a0
; ZVFH-NEXT: addi a0, a0, 16
; ZVFH-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v16
; ZVFH-NEXT: csrr a0, vlenb
; ZVFH-NEXT: slli a0, a0, 3
@@ -316,6 +316,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -326,7 +327,6 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN-NEXT: add a0, sp, a0
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: li a1, 24
@@ -402,12 +402,12 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH-NEXT: slli a0, a0, 5
; ZVFH-NEXT: sub sp, sp, a0
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFH-NEXT: vmv8r.v v0, v16
; ZVFH-NEXT: addi a0, sp, 16
; ZVFH-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFH-NEXT: vmv8r.v v16, v8
; ZVFH-NEXT: fmv.x.h a0, fa0
-; ZVFH-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v16
; ZVFH-NEXT: csrr a1, vlenb
; ZVFH-NEXT: slli a1, a1, 4
@@ -498,12 +498,12 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: addi a0, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v16
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: slli a1, a1, 4
@@ -875,6 +875,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -885,7 +886,6 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: add a0, sp, a0
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v16
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: li a1, 24
@@ -967,12 +967,12 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: addi a0, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v16
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: slli a1, a1, 4
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
index dafcf8a1410d32..02d6229e992481 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
@@ -183,11 +183,11 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
@@ -516,11 +516,11 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
index b3df6572f79369..f7f80299785d43 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
@@ -183,11 +183,11 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
@@ -516,11 +516,11 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
index f4a236df4c9e4f..7e5523044a0103 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
@@ -495,11 +495,11 @@ define <vscale x 32 x half> @vfmul_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
@@ -614,12 +614,12 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v20
; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
@@ -706,10 +706,10 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v7
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
index d1702268f829fa..901f3cd63fa9e5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
@@ -1112,6 +1112,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x30, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 48 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: li a3, 24
@@ -1154,7 +1155,6 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: vl8re64.v v8, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: and a0, a7, a6
; CHECK-NEXT: csrr a2, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfneg-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfneg-vp.ll
index 343098e87649ea..bbab056f0ff461 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfneg-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfneg-vp.ll
@@ -450,11 +450,11 @@ declare <vscale x 16 x double> @llvm.vp.fneg.nxv16f64(<vscale x 16 x double>, <v
define <vscale x 16 x double> @vfneg_vv_nxv16f64(<vscale x 16 x double> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfneg_vv_nxv16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
index 3705e73fda492e..b8ec285b5c34e7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
@@ -329,6 +329,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: li a2, 24
@@ -338,7 +339,6 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vl8re16.v v16, (a0)
; ZVFHMIN-NEXT: lui a0, 8
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vxor.vx v8, v0, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
index 80edf0e3a4d811..742083f6f684b6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
@@ -310,6 +310,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: mul a1, a1, a2
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 24 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v16
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: slli a1, a1, 4
@@ -323,7 +324,6 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: vl8re16.v v24, (a0)
; ZVFHMIN-NEXT: lui a0, 8
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v0
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vxor.vx v0, v24, a0
@@ -389,6 +389,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: add a0, sp, a0
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a0) # Unknown-size Folded Spill
+; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v0, v8
; ZVFHMIN-NEXT: csrr a0, vlenb
; ZVFHMIN-NEXT: slli a0, a0, 4
@@ -396,7 +397,6 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: addi a0, a0, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v24, a0
; ZVFHMIN-NEXT: lui a0, 8
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfpext-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfpext-vp.ll
index 341fe678183b6f..6debc474606836 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfpext-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfpext-vp.ll
@@ -96,11 +96,11 @@ declare <vscale x 32 x float> @llvm.vp.fpext.nxv32f32.nxv32f16(<vscale x 32 x ha
define <vscale x 32 x float> @vfpext_nxv32f16_nxv32f32(<vscale x 32 x half> %a, <vscale x 32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vfpext_nxv32f16_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
index cf195c7c0935e4..d990c74c67d5a4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
@@ -508,11 +508,11 @@ declare <vscale x 32 x i16> @llvm.vp.fptosi.nxv32i16.nxv32f32(<vscale x 32 x flo
define <vscale x 32 x i16> @vfptosi_nxv32i16_nxv32f32(<vscale x 32 x float> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfptosi_nxv32i16_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -538,11 +538,11 @@ declare <vscale x 32 x i32> @llvm.vp.fptosi.nxv32i32.nxv32f32(<vscale x 32 x flo
define <vscale x 32 x i32> @vfptosi_nxv32i32_nxv32f32(<vscale x 32 x float> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfptosi_nxv32i32_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
index 952d28604b86c6..3b24a648d97f5f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
@@ -508,11 +508,11 @@ declare <vscale x 32 x i16> @llvm.vp.fptoui.nxv32i16.nxv32f32(<vscale x 32 x flo
define <vscale x 32 x i16> @vfptoui_nxv32i16_nxv32f32(<vscale x 32 x float> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfptoui_nxv32i16_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -538,11 +538,11 @@ declare <vscale x 32 x i32> @llvm.vp.fptoui.nxv32i32.nxv32f32(<vscale x 32 x flo
define <vscale x 32 x i32> @vfptoui_nxv32i32_nxv32f32(<vscale x 32 x float> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfptoui_nxv32i32_nxv32f32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
index 874813f0575953..63156e1399293f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
@@ -102,13 +102,13 @@ define <vscale x 16 x float> @vfptrunc_nxv16f32_nxv16f64(<vscale x 16 x double>
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
@@ -147,6 +147,7 @@ define <vscale x 32 x float> @vfptrunc_nxv32f32_nxv32f64(<vscale x 32 x double>
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -160,7 +161,6 @@ define <vscale x 32 x float> @vfptrunc_nxv32f32_nxv32f64(<vscale x 32 x double>
; CHECK-NEXT: srli a5, a1, 2
; CHECK-NEXT: slli a6, a1, 3
; CHECK-NEXT: slli a4, a1, 1
-; CHECK-NEXT: vsetvli a7, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v16, v0, a5
; CHECK-NEXT: add a6, a0, a6
; CHECK-NEXT: sub a5, a2, a4
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfsqrt-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfsqrt-vp.ll
index 8edcf23988c7fb..8e57be1e0697c7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfsqrt-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfsqrt-vp.ll
@@ -167,12 +167,12 @@ declare <vscale x 32 x bfloat> @llvm.vp.sqrt.nxv32bf16(<vscale x 32 x bfloat>, <
define <vscale x 32 x bfloat> @vfsqrt_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfsqrt_vv_nxv32bf16:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
@@ -452,12 +452,12 @@ define <vscale x 32 x half> @vfsqrt_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
;
; ZVFHMIN-LABEL: vfsqrt_vv_nxv32f16:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v16, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
; ZVFHMIN-NEXT: sub a3, a0, a1
-; ZVFHMIN-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v0, v0, a2
; ZVFHMIN-NEXT: sltu a2, a0, a3
; ZVFHMIN-NEXT: addi a2, a2, -1
@@ -749,11 +749,11 @@ declare <vscale x 16 x double> @llvm.vp.sqrt.nxv16f64(<vscale x 16 x double>, <v
define <vscale x 16 x double> @vfsqrt_vv_nxv16f64(<vscale x 16 x double> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfsqrt_vv_nxv16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
index 25a80e66c4a527..d034f65479a159 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
@@ -373,11 +373,11 @@ define <vscale x 32 x bfloat> @vfsub_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20
; CHECK-NEXT: slli a1, a2, 1
; CHECK-NEXT: srli a2, a2, 2
@@ -480,12 +480,12 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v20
; CHECK-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; CHECK-NEXT: vmv.v.x v16, a1
@@ -566,10 +566,10 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: fmv.x.h a1, fa0
; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; CHECK-NEXT: vmset.m v7
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
@@ -1117,11 +1117,11 @@ define <vscale x 32 x half> @vfsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v20
; ZVFHMIN-NEXT: slli a1, a2, 1
; ZVFHMIN-NEXT: srli a2, a2, 2
@@ -1236,12 +1236,12 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x11, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 17 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v20
; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv.v.x v16, a1
@@ -1328,10 +1328,10 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v16, v8
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
; ZVFHMIN-NEXT: csrr a2, vlenb
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: vmset.m v7
; ZVFHMIN-NEXT: addi a3, sp, 16
; ZVFHMIN-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
index 1a1472fcfc66f5..fa67fa38144aae 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
@@ -111,6 +111,7 @@ define <vscale x 4 x i32> @different_vl_with_ta(<vscale x 4 x i32> %a, <vscale x
define <vscale x 4 x i32> @different_vl_with_tu(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: different_vl_with_tu:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v14, v10, v12
@@ -126,8 +127,8 @@ define <vscale x 4 x i32> @different_vl_with_tu(<vscale x 4 x i32> %passthru, <v
define <vscale x 4 x i32> @different_imm_vl_with_tu(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: different_imm_vl_with_tu:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vsetivli zero, 5, e32, m2, tu, ma
+; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vadd.vv v14, v10, v12
; CHECK-NEXT: vsetivli zero, 4, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v8, v14, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
index 1516d656663b6b..2962f796cf10c0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
@@ -51,6 +51,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
index b89097b8ff9744..4ec7dae561fad4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
@@ -25,6 +25,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -64,6 +65,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -103,6 +105,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -142,6 +145,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -181,6 +185,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -220,6 +225,7 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -259,6 +265,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -299,6 +306,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -339,6 +347,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -379,6 +388,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -419,6 +429,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -459,6 +470,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -500,6 +512,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -541,6 +554,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -582,6 +596,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -623,6 +638,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -664,6 +680,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -706,6 +723,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -748,6 +766,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -790,6 +809,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -832,6 +852,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -875,6 +896,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -918,6 +940,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -961,6 +984,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1004,6 +1028,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1048,6 +1073,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1092,6 +1118,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1136,6 +1163,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1180,6 +1208,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1225,6 +1254,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1270,6 +1300,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1315,6 +1346,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1359,6 +1391,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1397,6 +1430,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1435,6 +1469,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1473,6 +1508,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1511,6 +1547,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1549,6 +1586,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1588,6 +1626,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1627,6 +1666,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1666,6 +1706,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1705,6 +1746,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1745,6 +1787,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1785,6 +1828,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1825,6 +1869,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1865,6 +1910,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1906,6 +1952,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1947,6 +1994,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1988,6 +2036,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2030,6 +2079,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2072,6 +2122,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2114,6 +2165,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2157,6 +2209,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2200,6 +2253,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2243,6 +2297,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2287,6 +2342,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2331,6 +2387,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2375,6 +2432,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -2413,6 +2471,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -2451,6 +2510,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -2489,6 +2549,7 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -2527,6 +2588,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2566,6 +2628,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2605,6 +2668,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2644,6 +2708,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2684,6 +2749,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2724,6 +2790,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2764,6 +2831,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2805,6 +2873,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2846,6 +2915,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2888,6 +2958,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2930,6 +3001,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2973,6 +3045,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3016,6 +3089,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3060,6 +3134,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3104,6 +3179,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -3142,6 +3218,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -3180,6 +3257,7 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -3218,6 +3296,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3257,6 +3336,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3296,6 +3376,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3336,6 +3417,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3376,6 +3458,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3417,6 +3500,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3459,6 +3543,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3502,6 +3587,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3545,6 +3631,7 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -3582,6 +3669,7 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -3619,6 +3707,7 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -3656,6 +3745,7 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -3693,6 +3783,7 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -3730,6 +3821,7 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3768,6 +3860,7 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3806,6 +3899,7 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3844,6 +3938,7 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3882,6 +3977,7 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3921,6 +4017,7 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3960,6 +4057,7 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3999,6 +4097,7 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4038,6 +4137,7 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4078,6 +4178,7 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4118,6 +4219,7 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4158,6 +4260,7 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4199,6 +4302,7 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4240,6 +4344,7 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4281,6 +4386,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4323,6 +4429,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4365,6 +4472,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4407,6 +4515,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4450,6 +4559,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4493,6 +4603,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4536,6 +4647,7 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -4573,6 +4685,7 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -4610,6 +4723,7 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -4647,6 +4761,7 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -4684,6 +4799,7 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4722,6 +4838,7 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4760,6 +4877,7 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4798,6 +4916,7 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4837,6 +4956,7 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4876,6 +4996,7 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4915,6 +5036,7 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4955,6 +5077,7 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4995,6 +5118,7 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5036,6 +5160,7 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5077,6 +5202,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5119,6 +5245,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5161,6 +5288,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5204,6 +5332,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5247,6 +5376,7 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -5284,6 +5414,7 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -5321,6 +5452,7 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -5358,6 +5490,7 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5396,6 +5529,7 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5434,6 +5568,7 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5473,6 +5608,7 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5512,6 +5648,7 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5552,6 +5689,7 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5593,6 +5731,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5635,6 +5774,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5678,6 +5818,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -5715,6 +5856,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -5752,6 +5894,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -5789,6 +5932,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -5826,6 +5970,7 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -5863,6 +6008,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5901,6 +6047,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5939,6 +6086,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5977,6 +6125,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6015,6 +6164,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6054,6 +6204,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6093,6 +6244,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6132,6 +6284,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6171,6 +6324,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6211,6 +6365,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6251,6 +6406,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6291,6 +6447,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6332,6 +6489,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6373,6 +6531,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6414,6 +6573,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6456,6 +6616,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6498,6 +6659,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6540,6 +6702,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6583,6 +6746,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6626,6 +6790,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
index 3dc0db90b6d854..e7d4195e2174f1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
@@ -51,6 +51,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
index 68acb3beb06867..5da529a73a5240 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
@@ -25,6 +25,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -64,6 +65,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -103,6 +105,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -142,6 +145,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -181,6 +185,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -220,6 +225,7 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -259,6 +265,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -299,6 +306,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -339,6 +347,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -379,6 +388,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -419,6 +429,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -459,6 +470,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -500,6 +512,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -541,6 +554,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -582,6 +596,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -623,6 +638,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -664,6 +680,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -706,6 +723,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -748,6 +766,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -790,6 +809,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -832,6 +852,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -875,6 +896,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -918,6 +940,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -961,6 +984,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1004,6 +1028,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1048,6 +1073,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1092,6 +1118,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1136,6 +1163,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1180,6 +1208,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1225,6 +1254,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1270,6 +1300,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1315,6 +1346,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1359,6 +1391,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1397,6 +1430,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1435,6 +1469,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1473,6 +1508,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1511,6 +1547,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1549,6 +1586,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1588,6 +1626,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1627,6 +1666,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1666,6 +1706,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1705,6 +1746,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1745,6 +1787,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1785,6 +1828,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1825,6 +1869,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1865,6 +1910,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1906,6 +1952,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1947,6 +1994,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1988,6 +2036,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2030,6 +2079,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2072,6 +2122,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2114,6 +2165,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2157,6 +2209,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2200,6 +2253,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2243,6 +2297,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2287,6 +2342,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2331,6 +2387,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2375,6 +2432,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -2413,6 +2471,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -2451,6 +2510,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -2489,6 +2549,7 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -2527,6 +2588,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2566,6 +2628,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2605,6 +2668,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2644,6 +2708,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2684,6 +2749,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2724,6 +2790,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2764,6 +2831,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2805,6 +2873,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2846,6 +2915,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2888,6 +2958,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2930,6 +3001,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2973,6 +3045,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3016,6 +3089,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3060,6 +3134,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3104,6 +3179,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -3142,6 +3218,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -3180,6 +3257,7 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -3218,6 +3296,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3257,6 +3336,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3296,6 +3376,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3336,6 +3417,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3376,6 +3458,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3417,6 +3500,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3459,6 +3543,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3502,6 +3587,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3545,6 +3631,7 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -3582,6 +3669,7 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -3619,6 +3707,7 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -3656,6 +3745,7 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -3693,6 +3783,7 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -3730,6 +3821,7 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3768,6 +3860,7 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3806,6 +3899,7 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3844,6 +3938,7 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3882,6 +3977,7 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3921,6 +4017,7 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3960,6 +4057,7 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3999,6 +4097,7 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4038,6 +4137,7 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4078,6 +4178,7 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4118,6 +4219,7 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4158,6 +4260,7 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4199,6 +4302,7 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4240,6 +4344,7 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4281,6 +4386,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4323,6 +4429,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4365,6 +4472,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4407,6 +4515,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4450,6 +4559,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4493,6 +4603,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4536,6 +4647,7 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -4573,6 +4685,7 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -4610,6 +4723,7 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -4647,6 +4761,7 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -4684,6 +4799,7 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4722,6 +4838,7 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4760,6 +4877,7 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4798,6 +4916,7 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4837,6 +4956,7 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4876,6 +4996,7 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4915,6 +5036,7 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4955,6 +5077,7 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4995,6 +5118,7 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5036,6 +5160,7 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5077,6 +5202,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5119,6 +5245,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5161,6 +5288,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5204,6 +5332,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5247,6 +5376,7 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -5284,6 +5414,7 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -5321,6 +5452,7 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -5358,6 +5490,7 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5396,6 +5529,7 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5434,6 +5568,7 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5473,6 +5608,7 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5512,6 +5648,7 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5552,6 +5689,7 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5593,6 +5731,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5635,6 +5774,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5678,6 +5818,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -5715,6 +5856,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -5752,6 +5894,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -5789,6 +5932,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -5826,6 +5970,7 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -5863,6 +6008,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5901,6 +6047,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5939,6 +6086,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5977,6 +6125,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6015,6 +6164,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6054,6 +6204,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6093,6 +6244,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6132,6 +6284,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6171,6 +6324,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6211,6 +6365,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6251,6 +6406,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6291,6 +6447,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6332,6 +6489,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6373,6 +6531,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6414,6 +6573,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6456,6 +6616,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6498,6 +6659,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6540,6 +6702,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6583,6 +6746,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6626,6 +6790,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
index 0b553d3cd6fdf4..b839cd595f3bef 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
@@ -412,8 +412,8 @@ declare <vscale x 128 x i8> @llvm.vp.smax.nxv128i8(<vscale x 128 x i8>, <vscale
define <vscale x 128 x i8> @vmax_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmax_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -974,11 +974,11 @@ declare <vscale x 32 x i32> @llvm.vp.smax.nxv32i32(<vscale x 32 x i32>, <vscale
define <vscale x 32 x i32> @vmax_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmax_vx_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: srli a3, a2, 2
; CHECK-NEXT: slli a2, a2, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
@@ -1034,11 +1034,11 @@ declare i32 @llvm.vscale.i32()
define <vscale x 32 x i32> @vmax_vx_nxv32i32_evl_nx8(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m) {
; CHECK-LABEL: vmax_vx_nxv32i32_evl_nx8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a3, a1, 2
; CHECK-NEXT: slli a2, a1, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
index f6be882f742062..99e0dfaf90a2d8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
@@ -410,8 +410,8 @@ declare <vscale x 128 x i8> @llvm.vp.umax.nxv128i8(<vscale x 128 x i8>, <vscale
define <vscale x 128 x i8> @vmaxu_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmaxu_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -972,11 +972,11 @@ declare <vscale x 32 x i32> @llvm.vp.umax.nxv32i32(<vscale x 32 x i32>, <vscale
define <vscale x 32 x i32> @vmaxu_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmaxu_vx_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: srli a3, a2, 2
; CHECK-NEXT: slli a2, a2, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
@@ -1032,11 +1032,11 @@ declare i32 @llvm.vscale.i32()
define <vscale x 32 x i32> @vmaxu_vx_nxv32i32_evl_nx8(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m) {
; CHECK-LABEL: vmaxu_vx_nxv32i32_evl_nx8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a3, a1, 2
; CHECK-NEXT: slli a2, a1, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
index 3ebfc68ddee4b9..7216789241f9c8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
index e041e5874a8dc7..c50653730b38ca 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
index 0faaf4ebf255d4..6370e8287c77e3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
index ef5de6bc3481fb..d9ccc19b62095f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
index 0b7740d5e00457..d6b2163a82ca08 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
index 65a04e504a973b..e93f6cec3971ce 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -289,6 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -340,6 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -442,6 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -493,6 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -544,6 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -595,6 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -646,6 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -693,6 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -740,6 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -787,6 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -834,6 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -881,6 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -928,6 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -975,6 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1022,6 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1069,6 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1116,6 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1163,6 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
index 8690014cc2c9df..3441934fb1550e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
@@ -412,8 +412,8 @@ declare <vscale x 128 x i8> @llvm.vp.smin.nxv128i8(<vscale x 128 x i8>, <vscale
define <vscale x 128 x i8> @vmin_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmin_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -974,11 +974,11 @@ declare <vscale x 32 x i32> @llvm.vp.smin.nxv32i32(<vscale x 32 x i32>, <vscale
define <vscale x 32 x i32> @vmin_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmin_vx_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: srli a3, a2, 2
; CHECK-NEXT: slli a2, a2, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
@@ -1034,11 +1034,11 @@ declare i32 @llvm.vscale.i32()
define <vscale x 32 x i32> @vmin_vx_nxv32i32_evl_nx8(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m) {
; CHECK-LABEL: vmin_vx_nxv32i32_evl_nx8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a3, a1, 2
; CHECK-NEXT: slli a2, a1, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
index 414807829d5630..bbf4c886bfe9fb 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
@@ -410,8 +410,8 @@ declare <vscale x 128 x i8> @llvm.vp.umin.nxv128i8(<vscale x 128 x i8>, <vscale
define <vscale x 128 x i8> @vminu_vx_nxv128i8(<vscale x 128 x i8> %va, i8 %b, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vminu_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -972,11 +972,11 @@ declare <vscale x 32 x i32> @llvm.vp.umin.nxv32i32(<vscale x 32 x i32>, <vscale
define <vscale x 32 x i32> @vminu_vx_nxv32i32(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vminu_vx_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: srli a3, a2, 2
; CHECK-NEXT: slli a2, a2, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
@@ -1032,11 +1032,11 @@ declare i32 @llvm.vscale.i32()
define <vscale x 32 x i32> @vminu_vx_nxv32i32_evl_nx8(<vscale x 32 x i32> %va, i32 %b, <vscale x 32 x i1> %m) {
; CHECK-LABEL: vminu_vx_nxv32i32_evl_nx8:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a3, a1, 2
; CHECK-NEXT: slli a2, a1, 1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a3
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: sltu a4, a1, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
index d1f344d52763db..023e0ffdea0c24 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
@@ -31,6 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -73,6 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsbf.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -115,6 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsbf.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -157,6 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsbf.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -199,6 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsbf.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -241,6 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsbf.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -283,6 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
index 1fd2383c40d185..4ba813e88faf07 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
index 2dc133d169f0a8..62d4a6df5ddd81 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -953,6 +971,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1001,6 +1020,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1049,6 +1069,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1069,6 +1090,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: li a1, 99
; CHECK-NEXT: vmv1r.v v0, v9
@@ -1136,6 +1158,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1184,6 +1207,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1232,6 +1256,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1280,6 +1305,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1376,6 +1403,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1424,6 +1452,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1472,6 +1501,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1520,6 +1550,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1568,6 +1599,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1616,6 +1648,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1664,6 +1697,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1739,6 +1773,7 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1814,6 +1849,7 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1889,6 +1925,7 @@ define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1924,6 +1961,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1959,6 +1997,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -2025,6 +2064,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2060,6 +2100,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2095,6 +2136,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2130,6 +2172,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2165,6 +2208,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2200,6 +2244,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2235,6 +2280,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2270,6 +2316,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2305,6 +2352,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2340,6 +2388,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2375,6 +2424,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2410,6 +2460,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2445,6 +2496,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2480,6 +2532,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2515,6 +2568,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2550,6 +2604,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
index 69a3835cd4d678..ab0e75ef09e53e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -953,6 +971,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1001,6 +1020,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1049,6 +1069,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1097,6 +1118,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1145,6 +1167,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1193,6 +1216,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1241,6 +1265,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1289,6 +1314,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1337,6 +1363,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1385,6 +1412,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1433,6 +1461,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1481,6 +1510,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1529,6 +1559,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1577,6 +1608,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1625,6 +1657,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1700,6 +1733,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1775,6 +1809,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1850,6 +1885,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1885,6 +1921,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1920,6 +1957,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1955,6 +1993,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1975,6 +2014,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: li a1, 99
; CHECK-NEXT: vmv1r.v v0, v9
@@ -2011,6 +2051,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2046,6 +2087,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2081,6 +2123,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2116,6 +2159,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2214,6 +2258,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2249,6 +2294,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2284,6 +2330,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2319,6 +2366,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2354,6 +2402,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2389,6 +2438,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2424,6 +2474,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2459,6 +2510,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2494,6 +2546,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2529,6 +2582,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
index d7dee2e1bc580e..a0b1ac655a0dd2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
index fe9d522f6b401b..c29df31a3e6008 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
index 1dc52eb55455ba..b666ab5ab87b1a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
@@ -31,6 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsif.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsif_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -73,6 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsif.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsif_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -115,6 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsif.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsif_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -157,6 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsif.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsif_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -199,6 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsif.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsif_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -241,6 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsif.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsif_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -283,6 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsif.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsif_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
index bc98b31957b255..c849b41461eb4e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
index 731989cfe15d95..ccd70da73aabb5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
index 407f85b4f5996b..c60f1d4ce0d2e5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
index e051b332018fd5..a9f4dc45a9aebd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
index 1e21b847ed20d8..43f09561bf7526 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
@@ -34,6 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -85,6 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -136,6 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -187,6 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -238,6 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -289,6 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -340,6 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -391,6 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -442,6 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -493,6 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -544,6 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -595,6 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -646,6 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -697,6 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -748,6 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -799,6 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -850,6 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -901,6 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -952,6 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -999,6 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1046,6 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1093,6 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1140,6 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1187,6 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1234,6 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1281,6 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1328,6 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1375,6 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1422,6 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1469,6 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1516,6 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1563,6 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1610,6 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1684,6 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1758,6 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1832,6 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1867,6 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1902,6 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1937,6 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -1972,6 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2007,6 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2042,6 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2077,6 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2112,6 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2147,6 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2182,6 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2217,6 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2252,6 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2287,6 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2322,6 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2357,6 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2392,6 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2427,6 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2462,6 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
index b0a28e6e455b07..e80131315b5e79 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
@@ -31,6 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsof.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsof_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -73,6 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsof.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsof_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -115,6 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsof.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsof_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -157,6 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsof.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsof_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -199,6 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsof.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsof_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -241,6 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsof.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsof_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -283,6 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsof.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsof_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
index 7f248a39b54fa9..7107063693a08d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
@@ -49,6 +49,7 @@ define <vscale x 4 x i32> @vadd_same_passthru(<vscale x 4 x i32> %passthru, <vsc
define <vscale x 4 x i32> @unfoldable_diff_avl_unknown(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: unfoldable_diff_avl_unknown:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v14, v10, v12
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
index f7ca65801dc874..fa2f750c6000c3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
@@ -5,6 +5,7 @@
define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -17,6 +18,7 @@ define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
;
; RV64-LABEL: bool_vec:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v9, v0
; RV64-NEXT: slli a0, a0, 32
; RV64-NEXT: srli a0, a0, 32
@@ -35,6 +37,7 @@ define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
define iXLen @bool_vec_zero_poison(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec_zero_poison:
; RV32: # %bb.0:
+; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -43,6 +46,7 @@ define iXLen @bool_vec_zero_poison(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m,
;
; RV64-LABEL: bool_vec_zero_poison:
; RV64: # %bb.0:
+; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v9, v0
; RV64-NEXT: slli a0, a0, 32
; RV64-NEXT: srli a0, a0, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-select.ll b/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
index c8a048971a803d..5aaf33a4c4406e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
@@ -12,6 +12,7 @@ define <vscale x 1 x i64> @all_ones(<vscale x 1 x i64> %true, <vscale x 1 x i64>
define <vscale x 1 x i64> @all_zeroes(<vscale x 1 x i64> %true, <vscale x 1 x i64> %false, i32 %evl) {
; CHECK-LABEL: all_zeroes:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%v = call <vscale x 1 x i64> @llvm.vp.select.nxv1i64(<vscale x 1 x i1> splat (i1 false), <vscale x 1 x i64> %true, <vscale x 1 x i64> %false, i32 %evl)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
index 2a137099bcb0f4..a03a12871fc470 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
@@ -10,6 +10,7 @@ declare <16 x i1> @llvm.experimental.vp.splice.v16i1(<16 x i1>, <16 x i1>, i32,
define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -34,6 +35,7 @@ define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %ev
define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -58,6 +60,7 @@ define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb,
define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -83,6 +86,7 @@ define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1>
define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -107,6 +111,7 @@ define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %ev
define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -131,6 +136,7 @@ define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb,
define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -156,6 +162,7 @@ define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1>
define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -180,6 +187,7 @@ define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %ev
define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -204,6 +212,7 @@ define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb,
define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -229,6 +238,7 @@ define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1>
define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -253,6 +263,7 @@ define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext
define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -277,6 +288,7 @@ define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1>
define <16 x i1> @test_vp_splice_v16i1_masked(<16 x i1> %va, <16 x i1> %vb, <16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
index fc446d0a3a88ac..39e77b124aee28 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
@@ -13,6 +13,7 @@ declare <vscale x 64 x i1> @llvm.experimental.vp.splice.nxv64i1(<vscale x 64 x i
define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -37,6 +38,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -61,6 +63,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, <vscale x 1 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -86,6 +89,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <v
define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -110,6 +114,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -134,6 +139,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, <vscale x 2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -159,6 +165,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <v
define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -183,6 +190,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -207,6 +215,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, <vscale x 4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -232,6 +241,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <v
define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -256,6 +266,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -280,6 +291,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, <vscale x 8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -305,6 +317,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <v
define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -329,6 +342,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscal
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -353,6 +367,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, <vscale x 16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -379,6 +394,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va,
define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -403,6 +419,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscal
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -427,6 +444,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, <vscale x 32 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -453,6 +471,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va,
define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -477,6 +496,7 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscal
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_negative_offset:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -501,6 +521,7 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_masked(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, <vscale x 64 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_masked:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
index 3e423c8ec99030..ca52ce6e2c4a1a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
@@ -258,12 +258,12 @@ declare <vscale x 32 x i8> @llvm.vp.gather.nxv32i8.nxv32p0(<vscale x 32 x ptr>,
define <vscale x 32 x i8> @vpgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8> %idxs, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv32i8:
; RV32: # %bb.0:
+; RV32-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: csrr a3, vlenb
; RV32-NEXT: slli a2, a3, 1
; RV32-NEXT: srli a3, a3, 2
; RV32-NEXT: sub a4, a1, a2
-; RV32-NEXT: vsetvli a5, zero, e8, mf2, ta, ma
; RV32-NEXT: vslidedown.vx v0, v0, a3
; RV32-NEXT: sltu a3, a1, a4
; RV32-NEXT: addi a3, a3, -1
@@ -285,12 +285,12 @@ define <vscale x 32 x i8> @vpgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8
;
; RV64-LABEL: vpgather_baseidx_nxv32i8:
; RV64: # %bb.0:
+; RV64-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: csrr a2, vlenb
; RV64-NEXT: slli a3, a2, 1
; RV64-NEXT: srli a4, a2, 2
; RV64-NEXT: sub a5, a1, a3
-; RV64-NEXT: vsetvli a6, zero, e8, mf2, ta, ma
; RV64-NEXT: vslidedown.vx v13, v0, a4
; RV64-NEXT: sltu a4, a1, a5
; RV64-NEXT: addi a4, a4, -1
@@ -2457,11 +2457,11 @@ declare <vscale x 16 x double> @llvm.vp.gather.nxv16f64.nxv16p0(<vscale x 16 x p
define <vscale x 16 x double> @vpgather_nxv16f64(<vscale x 16 x ptr> %ptrs, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_nxv16f64:
; RV32: # %bb.0:
+; RV32-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v24, v0
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: sub a2, a0, a1
; RV32-NEXT: srli a3, a1, 3
-; RV32-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; RV32-NEXT: vslidedown.vx v0, v0, a3
; RV32-NEXT: sltu a3, a0, a2
; RV32-NEXT: addi a3, a3, -1
@@ -2480,11 +2480,11 @@ define <vscale x 16 x double> @vpgather_nxv16f64(<vscale x 16 x ptr> %ptrs, <vsc
;
; RV64-LABEL: vpgather_nxv16f64:
; RV64: # %bb.0:
+; RV64-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; RV64-NEXT: vmv1r.v v24, v0
; RV64-NEXT: csrr a1, vlenb
; RV64-NEXT: sub a2, a0, a1
; RV64-NEXT: srli a3, a1, 3
-; RV64-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; RV64-NEXT: vslidedown.vx v0, v0, a3
; RV64-NEXT: sltu a3, a0, a2
; RV64-NEXT: addi a3, a3, -1
@@ -2506,8 +2506,8 @@ define <vscale x 16 x double> @vpgather_nxv16f64(<vscale x 16 x ptr> %ptrs, <vsc
define <vscale x 16 x double> @vpgather_baseidx_nxv16i16_nxv16f64(ptr %base, <vscale x 16 x i16> %idxs, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv16i16_nxv16f64:
; RV32: # %bb.0:
-; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vsetvli a2, zero, e32, m8, ta, ma
+; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vsext.vf2 v16, v8
; RV32-NEXT: csrr a2, vlenb
; RV32-NEXT: vsll.vi v24, v16, 3
@@ -2531,8 +2531,8 @@ define <vscale x 16 x double> @vpgather_baseidx_nxv16i16_nxv16f64(ptr %base, <vs
;
; RV64-LABEL: vpgather_baseidx_nxv16i16_nxv16f64:
; RV64: # %bb.0:
-; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vsext.vf4 v16, v10
; RV64-NEXT: csrr a2, vlenb
; RV64-NEXT: vsll.vi v16, v16, 3
@@ -2564,8 +2564,8 @@ define <vscale x 16 x double> @vpgather_baseidx_nxv16i16_nxv16f64(ptr %base, <vs
define <vscale x 16 x double> @vpgather_baseidx_sext_nxv16i16_nxv16f64(ptr %base, <vscale x 16 x i16> %idxs, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv16i16_nxv16f64:
; RV32: # %bb.0:
-; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vsetvli a2, zero, e32, m8, ta, ma
+; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vsext.vf2 v16, v8
; RV32-NEXT: csrr a2, vlenb
; RV32-NEXT: vsll.vi v24, v16, 3
@@ -2589,8 +2589,8 @@ define <vscale x 16 x double> @vpgather_baseidx_sext_nxv16i16_nxv16f64(ptr %base
;
; RV64-LABEL: vpgather_baseidx_sext_nxv16i16_nxv16f64:
; RV64: # %bb.0:
-; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vsext.vf4 v16, v10
; RV64-NEXT: csrr a2, vlenb
; RV64-NEXT: vsll.vi v16, v16, 3
@@ -2623,8 +2623,8 @@ define <vscale x 16 x double> @vpgather_baseidx_sext_nxv16i16_nxv16f64(ptr %base
define <vscale x 16 x double> @vpgather_baseidx_zext_nxv16i16_nxv16f64(ptr %base, <vscale x 16 x i16> %idxs, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv16i16_nxv16f64:
; RV32: # %bb.0:
-; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vsetvli a2, zero, e32, m8, ta, ma
+; RV32-NEXT: vmv1r.v v12, v0
; RV32-NEXT: vzext.vf2 v16, v8
; RV32-NEXT: csrr a2, vlenb
; RV32-NEXT: vsll.vi v24, v16, 3
@@ -2648,8 +2648,8 @@ define <vscale x 16 x double> @vpgather_baseidx_zext_nxv16i16_nxv16f64(ptr %base
;
; RV64-LABEL: vpgather_baseidx_zext_nxv16i16_nxv16f64:
; RV64: # %bb.0:
-; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vsetvli a2, zero, e32, m8, ta, ma
+; RV64-NEXT: vmv1r.v v12, v0
; RV64-NEXT: vzext.vf2 v16, v8
; RV64-NEXT: csrr a2, vlenb
; RV64-NEXT: vsll.vi v24, v16, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpload.ll b/llvm/test/CodeGen/RISCV/rvv/vpload.ll
index bd7ea6c19d0b30..74f642829e0d09 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpload.ll
@@ -522,12 +522,12 @@ declare <vscale x 16 x double> @llvm.vp.load.nxv16f64.p0(ptr, <vscale x 16 x i1>
define <vscale x 16 x double> @vpload_nxv16f64(ptr %ptr, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpload_nxv16f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: sub a3, a1, a2
; CHECK-NEXT: slli a4, a2, 3
; CHECK-NEXT: srli a5, a2, 3
-; CHECK-NEXT: vsetvli a6, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a5
; CHECK-NEXT: sltu a5, a1, a3
; CHECK-NEXT: addi a5, a5, -1
@@ -561,6 +561,7 @@ declare <vscale x 16 x double> @llvm.vector.extract.nxv16f64(<vscale x 17 x doub
define <vscale x 16 x double> @vpload_nxv17f64(ptr %ptr, ptr %out, <vscale x 17 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpload_nxv17f64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: csrr a3, vlenb
; CHECK-NEXT: slli a5, a3, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
index f029d0b1b01bc0..88a8ebcc90054b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
@@ -361,12 +361,12 @@ define <vscale x 128 x i8> @vpmerge_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vmv8r.v v24, v16
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: vsetvli a4, zero, e8, m8, ta, ma
; CHECK-NEXT: vlm.v v0, (a2)
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: add a2, a0, a1
@@ -401,8 +401,8 @@ define <vscale x 128 x i8> @vpmerge_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
define <vscale x 128 x i8> @vpmerge_vx_nxv128i8(i8 %a, <vscale x 128 x i8> %vb, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpmerge_vx_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a3, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a1)
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -429,8 +429,8 @@ define <vscale x 128 x i8> @vpmerge_vx_nxv128i8(i8 %a, <vscale x 128 x i8> %vb,
define <vscale x 128 x i8> @vpmerge_vi_nxv128i8(<vscale x 128 x i8> %vb, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpmerge_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
index 8978dc268d4e52..35f922734c57a8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
@@ -468,6 +468,7 @@ define void @vpstore_nxv17f64(<vscale x 17 x double> %val, ptr %ptr, <vscale x 1
; CHECK-NEXT: slli a3, a3, 3
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v16, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
index 39666bb6119a0f..2f1c3d80ef88d7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
@@ -23,6 +23,7 @@ declare i1 @llvm.vp.reduce.or.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>, i
define zeroext i1 @vpreduce_or_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -39,6 +40,7 @@ declare i1 @llvm.vp.reduce.xor.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_xor_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -71,6 +73,7 @@ declare i1 @llvm.vp.reduce.or.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>, i
define zeroext i1 @vpreduce_or_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -87,6 +90,7 @@ declare i1 @llvm.vp.reduce.xor.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_xor_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -119,6 +123,7 @@ declare i1 @llvm.vp.reduce.or.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>, i
define zeroext i1 @vpreduce_or_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -135,6 +140,7 @@ declare i1 @llvm.vp.reduce.xor.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_xor_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -167,6 +173,7 @@ declare i1 @llvm.vp.reduce.or.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>, i
define zeroext i1 @vpreduce_or_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -183,6 +190,7 @@ declare i1 @llvm.vp.reduce.xor.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_xor_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -215,6 +223,7 @@ declare i1 @llvm.vp.reduce.or.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1>
define zeroext i1 @vpreduce_or_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -231,6 +240,7 @@ declare i1 @llvm.vp.reduce.xor.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_xor_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -263,6 +273,7 @@ declare i1 @llvm.vp.reduce.or.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1>
define zeroext i1 @vpreduce_or_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -279,6 +290,7 @@ declare i1 @llvm.vp.reduce.xor.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_xor_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -295,6 +307,7 @@ declare i1 @llvm.vp.reduce.or.nxv40i1(i1, <vscale x 40 x i1>, <vscale x 40 x i1>
define zeroext i1 @vpreduce_or_nxv40i1(i1 zeroext %s, <vscale x 40 x i1> %v, <vscale x 40 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv40i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -327,6 +340,7 @@ declare i1 @llvm.vp.reduce.or.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1>
define zeroext i1 @vpreduce_or_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -343,6 +357,7 @@ declare i1 @llvm.vp.reduce.xor.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_xor_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -359,6 +374,7 @@ declare i1 @llvm.vp.reduce.or.nxv128i1(i1, <vscale x 128 x i1>, <vscale x 128 x
define zeroext i1 @vpreduce_or_nxv128i1(i1 zeroext %s, <vscale x 128 x i1> %v, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv128i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: slli a2, a2, 3
@@ -390,6 +406,7 @@ declare i1 @llvm.vp.reduce.add.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_add_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -406,6 +423,7 @@ declare i1 @llvm.vp.reduce.add.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_add_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -422,6 +440,7 @@ declare i1 @llvm.vp.reduce.add.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_add_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -438,6 +457,7 @@ declare i1 @llvm.vp.reduce.add.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_add_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -454,6 +474,7 @@ declare i1 @llvm.vp.reduce.add.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_add_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -470,6 +491,7 @@ declare i1 @llvm.vp.reduce.add.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_add_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -486,6 +508,7 @@ declare i1 @llvm.vp.reduce.add.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_add_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -615,6 +638,7 @@ declare i1 @llvm.vp.reduce.smin.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_smin_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -631,6 +655,7 @@ declare i1 @llvm.vp.reduce.smin.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_smin_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -647,6 +672,7 @@ declare i1 @llvm.vp.reduce.smin.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_smin_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -663,6 +689,7 @@ declare i1 @llvm.vp.reduce.smin.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_smin_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -679,6 +706,7 @@ declare i1 @llvm.vp.reduce.smin.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_smin_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -695,6 +723,7 @@ declare i1 @llvm.vp.reduce.smin.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_smin_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -711,6 +740,7 @@ declare i1 @llvm.vp.reduce.smin.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_smin_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -727,6 +757,7 @@ declare i1 @llvm.vp.reduce.umax.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_umax_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv1i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -743,6 +774,7 @@ declare i1 @llvm.vp.reduce.umax.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_umax_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv2i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -759,6 +791,7 @@ declare i1 @llvm.vp.reduce.umax.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_umax_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv4i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -775,6 +808,7 @@ declare i1 @llvm.vp.reduce.umax.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_umax_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv8i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -791,6 +825,7 @@ declare i1 @llvm.vp.reduce.umax.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_umax_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv16i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -807,6 +842,7 @@ declare i1 @llvm.vp.reduce.umax.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_umax_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv32i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -823,6 +859,7 @@ declare i1 @llvm.vp.reduce.umax.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_umax_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv64i1:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vrgatherei16-subreg-liveness.ll b/llvm/test/CodeGen/RISCV/rvv/vrgatherei16-subreg-liveness.ll
index 1779fc12095e88..7b460f2c058f85 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vrgatherei16-subreg-liveness.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vrgatherei16-subreg-liveness.ll
@@ -22,8 +22,8 @@ define internal void @foo(<vscale x 1 x i16> %v15, <vscale x 1 x i16> %0, <vscal
; NOSUBREG-NEXT: .LBB0_1: # %loopIR3.i.i
; NOSUBREG-NEXT: # =>This Inner Loop Header: Depth=1
; NOSUBREG-NEXT: vl1r.v v9, (zero)
-; NOSUBREG-NEXT: vmv1r.v v13, v12
; NOSUBREG-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; NOSUBREG-NEXT: vmv1r.v v13, v12
; NOSUBREG-NEXT: vrgatherei16.vv v13, v9, v10
; NOSUBREG-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOSUBREG-NEXT: vand.vv v9, v8, v13
@@ -42,8 +42,8 @@ define internal void @foo(<vscale x 1 x i16> %v15, <vscale x 1 x i16> %0, <vscal
; SUBREG-NEXT: .LBB0_1: # %loopIR3.i.i
; SUBREG-NEXT: # =>This Inner Loop Header: Depth=1
; SUBREG-NEXT: vl1r.v v9, (zero)
-; SUBREG-NEXT: vmv1r.v v13, v12
; SUBREG-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; SUBREG-NEXT: vmv1r.v v13, v12
; SUBREG-NEXT: vrgatherei16.vv v13, v9, v10
; SUBREG-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SUBREG-NEXT: vand.vv v9, v8, v13
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsadd-vp.ll
index 12c439346e3569..3421c6af334bc0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsadd-vp.ll
@@ -572,8 +572,8 @@ declare <vscale x 128 x i8> @llvm.vp.sadd.sat.nxv128i8(<vscale x 128 x i8>, <vsc
define <vscale x 128 x i8> @vsadd_vi_nxv128i8(<vscale x 128 x i8> %va, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsadd_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -1350,11 +1350,11 @@ declare <vscale x 32 x i32> @llvm.vp.sadd.sat.nxv32i32(<vscale x 32 x i32>, <vsc
define <vscale x 32 x i32> @vsadd_vi_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsadd_vi_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsaddu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsaddu-vp.ll
index d962f703abfd22..180e0799044e8b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsaddu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsaddu-vp.ll
@@ -571,8 +571,8 @@ declare <vscale x 128 x i8> @llvm.vp.uadd.sat.nxv128i8(<vscale x 128 x i8>, <vsc
define <vscale x 128 x i8> @vsaddu_vi_nxv128i8(<vscale x 128 x i8> %va, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsaddu_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -1349,11 +1349,11 @@ declare <vscale x 32 x i32> @llvm.vp.uadd.sat.nxv32i32(<vscale x 32 x i32>, <vsc
define <vscale x 32 x i32> @vsaddu_vi_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsaddu_vi_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
index a63d14e8b6c04e..b81986a13bd670 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
@@ -126,6 +126,7 @@ define <vscale x 8 x bfloat> @vmerge_truelhs_nxv8bf16_0(<vscale x 8 x bfloat> %v
define <vscale x 8 x bfloat> @vmerge_falselhs_nxv8bf16_0(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8bf16_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
index 1fc33dc73a27dc..a2b4f308b6a323 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
@@ -175,6 +175,7 @@ define <vscale x 8 x half> @vmerge_truelhs_nxv8f16_0(<vscale x 8 x half> %va, <v
define <vscale x 8 x half> @vmerge_falselhs_nxv8f16_0(<vscale x 8 x half> %va, <vscale x 8 x half> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8f16_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x half> %va, <vscale x 8 x half> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
index 9cafa28eb429f1..8179374cb8c7f4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
@@ -803,6 +803,7 @@ define <vscale x 8 x i64> @vmerge_truelhs_nxv8i64_0(<vscale x 8 x i64> %va, <vsc
define <vscale x 8 x i64> @vmerge_falselhs_nxv8i64_0(<vscale x 8 x i64> %va, <vscale x 8 x i64> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8i64_0:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v8, v16
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x i64> %va, <vscale x 8 x i64> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
index bb51f0592dc17a..a7c521cd464369 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
@@ -362,6 +362,7 @@ define <vscale x 32 x i32> @select_nxv32i32(<vscale x 32 x i1> %a, <vscale x 32
; CHECK-NEXT: add a1, sp, a1
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a3, vlenb
; CHECK-NEXT: slli a4, a3, 3
@@ -375,7 +376,6 @@ define <vscale x 32 x i32> @select_nxv32i32(<vscale x 32 x i1> %a, <vscale x 32
; CHECK-NEXT: vl8re32.v v0, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v0, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v24, a3
; CHECK-NEXT: and a4, a4, a5
; CHECK-NEXT: vsetvli zero, a4, e32, m8, ta, ma
@@ -421,6 +421,7 @@ define <vscale x 32 x i32> @select_evl_nxv32i32(<vscale x 32 x i1> %a, <vscale x
; CHECK-NEXT: add a1, sp, a1
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a3, a1, 3
@@ -434,7 +435,6 @@ define <vscale x 32 x i32> @select_evl_nxv32i32(<vscale x 32 x i1> %a, <vscale x
; CHECK-NEXT: vl8re32.v v0, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v0, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v24, a4
; CHECK-NEXT: and a3, a3, a5
; CHECK-NEXT: vsetvli zero, a3, e32, m8, ta, ma
@@ -710,6 +710,7 @@ define <vscale x 16 x double> @select_nxv16f64(<vscale x 16 x i1> %a, <vscale x
; CHECK-NEXT: add a1, sp, a1
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a3, a1, 3
@@ -722,7 +723,6 @@ define <vscale x 16 x double> @select_nxv16f64(<vscale x 16 x i1> %a, <vscale x
; CHECK-NEXT: vl8re64.v v0, (a0)
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs8r.v v0, (a0) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetvli a0, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v24, a3
; CHECK-NEXT: and a4, a5, a4
; CHECK-NEXT: vsetvli zero, a4, e64, m8, ta, ma
@@ -835,6 +835,7 @@ define <vscale x 2 x i1> @select_cond_x_cond(<vscale x 2 x i1> %x, <vscale x 2 x
define <vscale x 2 x i1> @select_undef_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_undef_T_F:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> poison, <vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 %evl)
@@ -852,6 +853,7 @@ define <vscale x 2 x i1> @select_undef_undef_F(<vscale x 2 x i1> %x, i32 zeroext
define <vscale x 2 x i1> @select_unknown_undef_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_undef_F:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> undef, <vscale x 2 x i1> %y, i32 %evl)
@@ -861,6 +863,7 @@ define <vscale x 2 x i1> @select_unknown_undef_F(<vscale x 2 x i1> %x, <vscale x
define <vscale x 2 x i1> @select_unknown_T_undef(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_T_undef:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> poison, i32 %evl)
@@ -870,6 +873,7 @@ define <vscale x 2 x i1> @select_unknown_T_undef(<vscale x 2 x i1> %x, <vscale x
define <vscale x 2 x i1> @select_false_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> %z, i32 zeroext %evl) {
; CHECK-LABEL: select_false_T_F:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> zeroinitializer, <vscale x 2 x i1> %y, <vscale x 2 x i1> %z, i32 %evl)
@@ -879,6 +883,7 @@ define <vscale x 2 x i1> @select_false_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i
define <vscale x 2 x i1> @select_unknown_T_T(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_T_T:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> %y, i32 %evl)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
index 33acfb7dceb949..8186c67bf93e07 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
@@ -18,11 +18,11 @@ declare <vscale x 1 x i64> @llvm.riscv.vle.mask.nxv1i64(
define <2 x double> @fixed_length(<2 x double> %a, <2 x double> %b) nounwind {
; CHECK-LABEL: fixed_length:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: # kill: def $v11 killed $v10
; CHECK-NEXT: # kill: def $v9 killed $v8
; CHECK-NEXT: # implicit-def: $v9
-; CHECK-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; CHECK-NEXT: vfadd.vv v9, v8, v10
; CHECK-NEXT: # implicit-def: $v8
; CHECK-NEXT: vfadd.vv v8, v9, v10
@@ -36,9 +36,9 @@ entry:
define <vscale x 1 x double> @scalable(<vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: scalable:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: # implicit-def: $v9
-; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vfadd.vv v9, v8, v10
; CHECK-NEXT: # implicit-def: $v8
; CHECK-NEXT: vfadd.vv v8, v9, v10
@@ -53,8 +53,8 @@ entry:
define <vscale x 1 x double> @intrinsic_same_vlmax(<vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_same_vlmax:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetvli a0, zero, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: # implicit-def: $v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, tu, ma
; CHECK-NEXT: vfadd.vv v9, v8, v10
@@ -81,8 +81,8 @@ entry:
define <vscale x 1 x double> @intrinsic_same_avl_imm(<vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_same_avl_imm:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetivli a0, 2, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: # implicit-def: $v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, tu, ma
; CHECK-NEXT: vfadd.vv v9, v8, v10
@@ -108,6 +108,7 @@ entry:
define <vscale x 1 x double> @intrinsic_same_avl_reg(i64 %avl, <vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_same_avl_reg:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetvli a0, a0, e32, mf2, ta, ma
; CHECK-NEXT: # implicit-def: $v9
@@ -135,6 +136,7 @@ entry:
define <vscale x 1 x double> @intrinsic_diff_avl_reg(i64 %avl, i64 %avl2, <vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_diff_avl_reg:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetvli a0, a0, e32, mf2, ta, ma
; CHECK-NEXT: # implicit-def: $v9
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
index b0cb6bc6125ddf..2ca2803f3c746a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
@@ -377,8 +377,8 @@ entry:
define <vscale x 1 x double> @test19(<vscale x 1 x double> %a, double %b) nounwind {
; CHECK-LABEL: test19:
; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vsetivli zero, 2, e64, m8, tu, ma
; CHECK-NEXT: vmv1r.v v9, v8
-; CHECK-NEXT: vsetivli zero, 2, e64, m1, tu, ma
; CHECK-NEXT: vfmv.s.f v9, fa0
; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
; CHECK-NEXT: vfadd.vv v8, v9, v8
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsext-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsext-vp.ll
index d3b905ef897b1b..3c91131fe4d121 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsext-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsext-vp.ll
@@ -151,11 +151,11 @@ declare <vscale x 32 x i32> @llvm.vp.sext.nxv32i32.nxv32i8(<vscale x 32 x i8>, <
define <vscale x 32 x i32> @vsext_nxv32i8_nxv32i32(<vscale x 32 x i8> %a, <vscale x 32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vsext_nxv32i8_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
index 581cc666b6cbd5..44d3ee96f5e61d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
@@ -508,11 +508,11 @@ declare <vscale x 32 x half> @llvm.vp.sitofp.nxv32f16.nxv32i32(<vscale x 32 x i3
define <vscale x 32 x half> @vsitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vsitofp_nxv32f16_nxv32i32:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; ZVFH-NEXT: vmv1r.v v24, v0
; ZVFH-NEXT: csrr a1, vlenb
; ZVFH-NEXT: srli a2, a1, 2
; ZVFH-NEXT: slli a1, a1, 1
-; ZVFH-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; ZVFH-NEXT: vslidedown.vx v0, v0, a2
; ZVFH-NEXT: sub a2, a0, a1
; ZVFH-NEXT: sltu a3, a0, a2
@@ -532,11 +532,11 @@ define <vscale x 32 x half> @vsitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
;
; ZVFHMIN-LABEL: vsitofp_nxv32f16_nxv32i32:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: srli a2, a1, 2
; ZVFHMIN-NEXT: slli a1, a1, 1
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v0, v0, a2
; ZVFHMIN-NEXT: sub a2, a0, a1
; ZVFHMIN-NEXT: sltu a3, a0, a2
@@ -566,11 +566,11 @@ declare <vscale x 32 x float> @llvm.vp.sitofp.nxv32f32.nxv32i32(<vscale x 32 x i
define <vscale x 32 x float> @vsitofp_nxv32f32_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsitofp_nxv32f32_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
index f9c24eeec31c56..7ee6ea9e19df02 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
@@ -590,8 +590,8 @@ declare <vscale x 128 x i8> @llvm.vp.ssub.sat.nxv128i8(<vscale x 128 x i8>, <vsc
define <vscale x 128 x i8> @vssub_vi_nxv128i8(<vscale x 128 x i8> %va, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssub_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -1392,11 +1392,11 @@ declare <vscale x 32 x i32> @llvm.vp.ssub.sat.nxv32i32(<vscale x 32 x i32>, <vsc
define <vscale x 32 x i32> @vssub_vi_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssub_vi_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
index 04a1b522a8a33a..7674a457ca9617 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
@@ -588,8 +588,8 @@ declare <vscale x 128 x i8> @llvm.vp.usub.sat.nxv128i8(<vscale x 128 x i8>, <vsc
define <vscale x 128 x i8> @vssubu_vi_nxv128i8(<vscale x 128 x i8> %va, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssubu_vi_nxv128i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: vlm.v v0, (a0)
; CHECK-NEXT: csrr a0, vlenb
; CHECK-NEXT: slli a0, a0, 3
@@ -1390,11 +1390,11 @@ declare <vscale x 32 x i32> @llvm.vp.usub.sat.nxv32i32(<vscale x 32 x i32>, <vsc
define <vscale x 32 x i32> @vssubu_vi_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssubu_vi_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
index e62b7a00396388..fd5bf4ebcede82 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
@@ -157,11 +157,11 @@ declare <vscale x 15 x i16> @llvm.vp.trunc.nxv15i16.nxv15i64(<vscale x 15 x i64>
define <vscale x 15 x i16> @vtrunc_nxv15i16_nxv15i64(<vscale x 15 x i64> %a, <vscale x 15 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vtrunc_nxv15i16_nxv15i64:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 3
; CHECK-NEXT: sub a3, a0, a1
-; CHECK-NEXT: vsetvli a4, zero, e8, mf4, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sltu a2, a0, a3
; CHECK-NEXT: addi a2, a2, -1
@@ -214,11 +214,11 @@ declare <vscale x 32 x i7> @llvm.vp.trunc.nxv32i7.nxv32i32(<vscale x 32 x i32>,
define <vscale x 32 x i7> @vtrunc_nxv32i7_nxv32i32(<vscale x 32 x i32> %a, <vscale x 32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vtrunc_nxv32i7_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -248,11 +248,11 @@ declare <vscale x 32 x i8> @llvm.vp.trunc.nxv32i8.nxv32i32(<vscale x 32 x i32>,
define <vscale x 32 x i8> @vtrunc_nxv32i8_nxv32i32(<vscale x 32 x i32> %a, <vscale x 32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vtrunc_nxv32i8_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
@@ -288,6 +288,7 @@ define <vscale x 32 x i32> @vtrunc_nxv32i64_nxv32i32(<vscale x 32 x i64> %a, <vs
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
@@ -301,7 +302,6 @@ define <vscale x 32 x i32> @vtrunc_nxv32i64_nxv32i32(<vscale x 32 x i64> %a, <vs
; CHECK-NEXT: srli a5, a1, 2
; CHECK-NEXT: slli a6, a1, 3
; CHECK-NEXT: slli a4, a1, 1
-; CHECK-NEXT: vsetvli a7, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v16, v0, a5
; CHECK-NEXT: add a6, a0, a6
; CHECK-NEXT: sub a5, a2, a4
diff --git a/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
index 2c5a01279d5d37..f17cdd9c72bfcf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
@@ -500,11 +500,11 @@ declare <vscale x 32 x half> @llvm.vp.uitofp.nxv32f16.nxv32i32(<vscale x 32 x i3
define <vscale x 32 x half> @vuitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vuitofp_nxv32f16_nxv32i32:
; ZVFH: # %bb.0:
+; ZVFH-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; ZVFH-NEXT: vmv1r.v v24, v0
; ZVFH-NEXT: csrr a1, vlenb
; ZVFH-NEXT: srli a2, a1, 2
; ZVFH-NEXT: slli a1, a1, 1
-; ZVFH-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; ZVFH-NEXT: vslidedown.vx v0, v0, a2
; ZVFH-NEXT: sub a2, a0, a1
; ZVFH-NEXT: sltu a3, a0, a2
@@ -524,11 +524,11 @@ define <vscale x 32 x half> @vuitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
;
; ZVFHMIN-LABEL: vuitofp_nxv32f16_nxv32i32:
; ZVFHMIN: # %bb.0:
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vmv1r.v v7, v0
; ZVFHMIN-NEXT: csrr a1, vlenb
; ZVFHMIN-NEXT: srli a2, a1, 2
; ZVFHMIN-NEXT: slli a1, a1, 1
-; ZVFHMIN-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; ZVFHMIN-NEXT: vslidedown.vx v0, v0, a2
; ZVFHMIN-NEXT: sub a2, a0, a1
; ZVFHMIN-NEXT: sltu a3, a0, a2
@@ -558,11 +558,11 @@ declare <vscale x 32 x float> @llvm.vp.uitofp.nxv32f32.nxv32i32(<vscale x 32 x i
define <vscale x 32 x float> @vuitofp_nxv32f32_nxv32i32(<vscale x 32 x i32> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vuitofp_nxv32f32_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/vzext-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vzext-vp.ll
index 10e655c8445409..934d7eb43ac2ad 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vzext-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vzext-vp.ll
@@ -151,11 +151,11 @@ declare <vscale x 32 x i32> @llvm.vp.zext.nxv32i32.nxv32i8(<vscale x 32 x i8>, <
define <vscale x 32 x i32> @vzext_nxv32i8_nxv32i32(<vscale x 32 x i8> %a, <vscale x 32 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vzext_nxv32i8_nxv32i32:
; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: srli a2, a1, 2
; CHECK-NEXT: slli a1, a1, 1
-; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vslidedown.vx v0, v0, a2
; CHECK-NEXT: sub a2, a0, a1
; CHECK-NEXT: sltu a3, a0, a2
>From 1d86ed1308f5b8699b5b8af6cc3b4a19bf4e181a Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 00:02:46 +0800
Subject: [PATCH 2/9] Use getMinimalPhysRegClass and TSFlags to check whether
it's a whole vector copy
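The hard-coded list of RVV register classes is replaced by deriving the
class of the destination register and testing its TSFlags. For reference,
this is the shape of the new check (copied from the diff below; the
comments are mine, and the exact TSFlags bit consulted by isRVVRegClass is
an assumption on my part):

  // A whole vector register move is a COPY whose physical destination
  // lives in one of the RVV register classes (VR, VRM2, ..., VRN8M1 etc.),
  // which isRVVRegClass determines from the register class TSFlags.
  static bool isVectorCopy(const TargetRegisterInfo *TRI,
                           const MachineInstr &MI) {
    return MI.isCopy() && MI.getOperand(0).getReg().isPhysical() &&
           RISCVRegisterInfo::isRVVRegClass(
               TRI->getMinimalPhysRegClass(MI.getOperand(0).getReg()));
  }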
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 32 ++++++-------------
.../RISCV/rvv/vsetvli-insert-crossbb.mir | 2 +-
2 files changed, 11 insertions(+), 23 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 244a2d702bbbb4..74e2e69e6b2e31 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -27,6 +27,7 @@
#include "RISCV.h"
#include "RISCVSubtarget.h"
#include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/ValueTracking.h"
#include "llvm/CodeGen/LiveDebugVariables.h"
#include "llvm/CodeGen/LiveIntervals.h"
#include "llvm/CodeGen/LiveStacks.h"
@@ -196,24 +197,11 @@ static bool hasUndefinedPassthru(const MachineInstr &MI) {
}
/// Return true if \p MI is a copy that will be lowered to one or more vmvNr.vs.
-static bool isVecCopy(const MachineInstr &MI) {
- static const TargetRegisterClass *RVVRegClasses[] = {
- &RISCV::VRRegClass, &RISCV::VRM2RegClass, &RISCV::VRM4RegClass,
- &RISCV::VRM8RegClass, &RISCV::VRN2M1RegClass, &RISCV::VRN2M2RegClass,
- &RISCV::VRN2M4RegClass, &RISCV::VRN3M1RegClass, &RISCV::VRN3M2RegClass,
- &RISCV::VRN4M1RegClass, &RISCV::VRN4M2RegClass, &RISCV::VRN5M1RegClass,
- &RISCV::VRN6M1RegClass, &RISCV::VRN7M1RegClass, &RISCV::VRN8M1RegClass};
- if (!MI.isCopy())
- return false;
-
- Register DstReg = MI.getOperand(0).getReg();
- Register SrcReg = MI.getOperand(1).getReg();
- for (const auto &RegClass : RVVRegClasses) {
- if (RegClass->contains(DstReg, SrcReg)) {
- return true;
- }
- }
- return false;
+static bool isVectorCopy(const TargetRegisterInfo *TRI,
+ const MachineInstr &MI) {
+ return MI.isCopy() && MI.getOperand(0).getReg().isPhysical() &&
+ RISCVRegisterInfo::isRVVRegClass(
+ TRI->getMinimalPhysRegClass(MI.getOperand(0).getReg()));
}
/// Which subfields of VL or VTYPE have values we need to preserve?
@@ -537,7 +525,7 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
// However it does need valid SEW, i.e. vill must be cleared. The entry to a
// function, calls and inline assembly may all set it, so make sure we clear
// it for whole register copies.
- if (isVecCopy(MI))
+ if (isVectorCopy(ST->getRegisterInfo(), MI))
Res.VILL = true;
return Res;
@@ -1245,7 +1233,7 @@ static VSETVLIInfo adjustIncoming(VSETVLIInfo PrevInfo, VSETVLIInfo NewInfo,
// legal for MI, but may not be the state requested by MI.
void RISCVInsertVSETVLI::transferBefore(VSETVLIInfo &Info,
const MachineInstr &MI) const {
- if (isVecCopy(MI) &&
+ if (isVectorCopy(ST->getRegisterInfo(), MI) &&
(Info.isUnknown() || !Info.isValid() || Info.hasSEWLMULRatioOnly())) {
// Use an arbitrary but valid AVL and VTYPE so vill will be cleared. It may
// be coalesced into another vsetvli since we won't demand any fields.
@@ -1345,7 +1333,7 @@ bool RISCVInsertVSETVLI::computeVLVTYPEChanges(const MachineBasicBlock &MBB,
transferBefore(Info, MI);
if (isVectorConfigInstr(MI) || RISCVII::hasSEWOp(MI.getDesc().TSFlags) ||
- isVecCopy(MI))
+ isVectorCopy(ST->getRegisterInfo(), MI))
HadVectorOp = true;
transferAfter(Info, MI);
@@ -1475,7 +1463,7 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
PrefixTransparent = false;
}
- if (isVecCopy(MI) &&
+ if (isVectorCopy(ST->getRegisterInfo(), MI) &&
!PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
insertVSETVLI(MBB, MI, MI.getDebugLoc(), CurInfo, PrevInfo);
PrefixTransparent = false;
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
index bf93f5cc1f6f2c..55cefbbea81b20 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
@@ -835,8 +835,8 @@ body: |
; CHECK-NEXT: PseudoBR %bb.3
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: bb.3:
+ ; CHECK-NEXT: dead [[PseudoVSETVLIX0_1:%[0-9]+]]:gpr = PseudoVSETVLIX0 $x0, 197 /* e8, mf8, ta, ma */, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $v0 = COPY %mask
- ; CHECK-NEXT: dead $x0 = PseudoVSETVLIX0 killed $x0, 197 /* e8, mf8, ta, ma */, implicit-def $vl, implicit-def $vtype, implicit $vl
; CHECK-NEXT: PseudoVSOXEI64_V_M1_MF8_MASK [[COPY]], %b, %idxs, $v0, -1, 3 /* e8 */, implicit $vl, implicit $vtype
; CHECK-NEXT: PseudoRET
bb.0:
>From 9440a860b15db0141b72af6aab18b3a25682bcef Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 00:10:10 +0800
Subject: [PATCH 3/9] Remove unused include
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 1 -
1 file changed, 1 deletion(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 74e2e69e6b2e31..6a84bc63a55df8 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -27,7 +27,6 @@
#include "RISCV.h"
#include "RISCVSubtarget.h"
#include "llvm/ADT/Statistic.h"
-#include "llvm/Analysis/ValueTracking.h"
#include "llvm/CodeGen/LiveDebugVariables.h"
#include "llvm/CodeGen/LiveIntervals.h"
#include "llvm/CodeGen/LiveStacks.h"
>From cedea181e0d0dd07f40717a70989a7317ecb9419 Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 00:14:28 +0800
Subject: [PATCH 4/9] Use an AVL of 1
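The arbitrary state synthesized in transferBefore now uses an AVL of 1
instead of 0, so the emitted configuration becomes e.g.
vsetivli zero, 1, e8, m1, ta, ma in the updated tests below. A minimal
sketch of the construction, with the calls taken from the diff that
follows (the /*...*/ argument names are my reading of setVTYPE and should
be treated as assumptions):

  VSETVLIInfo NewInfo;  // fresh info so any SEWLMULRatioOnly state is cleared
  NewInfo.setAVLImm(1); // arbitrary but valid AVL; no fields are demanded, so 1 works
  NewInfo.setVTYPE(RISCVII::VLMUL::LMUL_1, /*SEW=*/8,
                   /*TailAgnostic=*/true, /*MaskAgnostic=*/true);
  Info = NewInfo;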
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 2 +-
.../CodeGen/RISCV/inline-asm-v-constraint.ll | 4 +-
.../CodeGen/RISCV/rvv/calling-conv-fastcc.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/calling-conv.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/expandload.ll | 1708 +++++------------
.../CodeGen/RISCV/rvv/extract-subvector.ll | 38 +-
.../RISCV/rvv/fixed-vectors-bitreverse-vp.ll | 4 +-
.../rvv/fixed-vectors-calling-conv-fastcc.ll | 2 +-
.../RISCV/rvv/fixed-vectors-calling-conv.ll | 2 +-
.../RISCV/rvv/fixed-vectors-ceil-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-ctpop-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-floor-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-fmaximum-vp.ll | 24 +-
.../RISCV/rvv/fixed-vectors-fminimum-vp.ll | 24 +-
.../rvv/fixed-vectors-insert-subvector.ll | 4 +-
.../RISCV/rvv/fixed-vectors-masked-gather.ll | 98 +-
.../rvv/fixed-vectors-masked-load-int.ll | 2 +-
.../RISCV/rvv/fixed-vectors-nearbyint-vp.ll | 14 +-
.../rvv/fixed-vectors-reduction-mask-vp.ll | 60 +-
.../RISCV/rvv/fixed-vectors-rint-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-round-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-roundeven-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-roundtozero-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-setcc-int-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-shuffle-concat.ll | 2 +-
.../rvv/fixed-vectors-shuffle-exact-vlen.ll | 4 +-
.../rvv/fixed-vectors-shuffle-vslide1up.ll | 2 +-
.../fixed-vectors-strided-load-store-asm.ll | 2 +-
.../RISCV/rvv/fixed-vectors-strided-vpload.ll | 6 +-
.../RISCV/rvv/fixed-vectors-unaligned.ll | 8 +-
.../RISCV/rvv/fixed-vectors-vadd-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vmax-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vmaxu-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vmin-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vminu-vp.ll | 2 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vpload.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vsadd-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vsaddu-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vselect-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vssub-vp.ll | 2 +-
.../RISCV/rvv/fixed-vectors-vssubu-vp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/floor-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll | 28 +-
.../RISCV/rvv/fold-scalar-load-crash.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/inline-asm.ll | 14 +-
.../CodeGen/RISCV/rvv/insert-subvector.ll | 44 +-
llvm/test/CodeGen/RISCV/rvv/masked-tama.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll | 28 +-
llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/rint-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/round-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll | 20 +-
.../RISCV/rvv/rv32-spill-vector-csr.ll | 2 +-
.../RISCV/rvv/rv64-spill-vector-csr.ll | 2 +-
.../test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll | 2 +-
.../RISCV/rvv/rvv-peephole-vmerge-vops.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll | 8 +-
.../test/CodeGen/RISCV/rvv/strided-vpstore.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vcpop.ll | 14 +-
.../RISCV/rvv/vector-reassociations.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/vfirst.ll | 14 +-
llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vl-opt.ll | 2 +-
.../CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll | 330 ++--
.../CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll | 330 ++--
llvm/test/CodeGen/RISCV/rvv/vmfeq.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmfge.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmfgt.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmfle.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmflt.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmfne.ll | 48 +-
llvm/test/CodeGen/RISCV/rvv/vmsbf.ll | 14 +-
llvm/test/CodeGen/RISCV/rvv/vmseq.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsge.ll | 110 +-
llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsgt.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsif.ll | 14 +-
llvm/test/CodeGen/RISCV/rvv/vmsle.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsleu.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmslt.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsltu.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsne.ll | 108 +-
llvm/test/CodeGen/RISCV/rvv/vmsof.ll | 14 +-
.../CodeGen/RISCV/rvv/vmv.v.v-peephole.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/vp-select.ll | 2 +-
.../RISCV/rvv/vp-splice-mask-fixed-vectors.ll | 24 +-
.../RISCV/rvv/vp-splice-mask-vectors.ll | 42 +-
llvm/test/CodeGen/RISCV/rvv/vpload.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vpstore.ll | 2 +-
.../CodeGen/RISCV/rvv/vreductions-mask-vp.ll | 74 +-
llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vselect-int.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll | 10 +-
.../CodeGen/RISCV/rvv/vsetvli-insert-O0.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll | 2 +-
104 files changed, 1951 insertions(+), 2805 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 6a84bc63a55df8..7d74c4f61f4fbd 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -1237,7 +1237,7 @@ void RISCVInsertVSETVLI::transferBefore(VSETVLIInfo &Info,
// Use an arbitrary but valid AVL and VTYPE so vill will be cleared. It may
// be coalesced into another vsetvli since we won't demand any fields.
VSETVLIInfo NewInfo; // Need a new VSETVLIInfo to clear SEWLMULRatioOnly
- NewInfo.setAVLImm(0);
+ NewInfo.setAVLImm(1);
NewInfo.setVTYPE(RISCVII::VLMUL::LMUL_1, 8, true, true);
Info = NewInfo;
return;
diff --git a/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll b/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
index 45bc1222c0677c..64957ed6e4ba61 100644
--- a/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
+++ b/llvm/test/CodeGen/RISCV/inline-asm-v-constraint.ll
@@ -45,7 +45,7 @@ define <vscale x 1 x i8> @constraint_vd(<vscale x 1 x i8> %0, <vscale x 1 x i8>
define <vscale x 1 x i1> @constraint_vm(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1) nounwind {
; RV32I-LABEL: constraint_vm:
; RV32I: # %bb.0:
-; RV32I-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32I-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32I-NEXT: vmv1r.v v9, v0
; RV32I-NEXT: vmv1r.v v0, v8
; RV32I-NEXT: #APP
@@ -55,7 +55,7 @@ define <vscale x 1 x i1> @constraint_vm(<vscale x 1 x i1> %0, <vscale x 1 x i1>
;
; RV64I-LABEL: constraint_vm:
; RV64I: # %bb.0:
-; RV64I-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64I-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64I-NEXT: vmv1r.v v9, v0
; RV64I-NEXT: vmv1r.v v0, v8
; RV64I-NEXT: #APP
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
index d824426727fd9a..15f6ca600cb37e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
@@ -336,7 +336,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV32-NEXT: add a1, a3, a1
; RV32-NEXT: li a3, 2
; RV32-NEXT: vs8r.v v16, (a1)
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v8, v0
; RV32-NEXT: vmv8r.v v16, v24
; RV32-NEXT: call ext2
@@ -375,7 +375,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV64-NEXT: add a1, a3, a1
; RV64-NEXT: li a3, 2
; RV64-NEXT: vs8r.v v16, (a1)
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv8r.v v8, v0
; RV64-NEXT: vmv8r.v v16, v24
; RV64-NEXT: call ext2
@@ -453,7 +453,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 128
; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v16, v0
; RV32-NEXT: call ext3
; RV32-NEXT: addi sp, s0, -144
@@ -526,7 +526,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 128
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv8r.v v16, v0
; RV64-NEXT: call ext3
; RV64-NEXT: addi sp, s0, -144
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
index 8d56a76dc8eb80..2e181e0914c886 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
@@ -103,7 +103,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @caller_tuple_return(
; RV32-NEXT: sw ra, 12(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
; RV32-NEXT: call callee_tuple_return
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v6, v8
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: vmv2r.v v10, v6
@@ -120,7 +120,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @caller_tuple_return(
; RV64-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64-NEXT: .cfi_offset ra, -8
; RV64-NEXT: call callee_tuple_return
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v6, v8
; RV64-NEXT: vmv2r.v v8, v10
; RV64-NEXT: vmv2r.v v10, v6
@@ -146,7 +146,7 @@ define void @caller_tuple_argument(target("riscv.vector.tuple", <vscale x 16 x i
; RV32-NEXT: .cfi_def_cfa_offset 16
; RV32-NEXT: sw ra, 12(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v6, v8
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: vmv2r.v v10, v6
@@ -163,7 +163,7 @@ define void @caller_tuple_argument(target("riscv.vector.tuple", <vscale x 16 x i
; RV64-NEXT: .cfi_def_cfa_offset 16
; RV64-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64-NEXT: .cfi_offset ra, -8
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v6, v8
; RV64-NEXT: vmv2r.v v8, v10
; RV64-NEXT: vmv2r.v v10, v6
diff --git a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
index 5bad25aab7aa29..48103170cba9a2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
@@ -649,7 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.ceil.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_ceil_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -736,7 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.ceil.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_ceil_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -823,7 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.ceil.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1071,7 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.ceil.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_ceil_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1116,7 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.ceil.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_ceil_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1161,7 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.ceil.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_ceil_vv_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1248,7 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.ceil.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_ceil_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1293,7 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.ceil.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_ceil_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1338,7 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.ceil.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_ceil_vv_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1383,7 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.ceil.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_ceil_vv_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/expandload.ll b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
index 13a7e444b77f3b..a35cf14203f785 100644
--- a/llvm/test/CodeGen/RISCV/rvv/expandload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
@@ -1786,10 +1786,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a3, .LBB61_66
; CHECK-RV32-NEXT: .LBB61_65: # %cond.load241
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 62
; CHECK-RV32-NEXT: li a4, 61
@@ -1940,10 +1938,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a2, .LBB61_100
; CHECK-RV32-NEXT: .LBB61_99: # %cond.load369
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 94
; CHECK-RV32-NEXT: li a4, 93
@@ -2094,10 +2090,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a3, .LBB61_134
; CHECK-RV32-NEXT: .LBB61_133: # %cond.load497
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 126
; CHECK-RV32-NEXT: li a4, 125
@@ -2248,10 +2242,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a2, .LBB61_168
; CHECK-RV32-NEXT: .LBB61_167: # %cond.load625
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 158
; CHECK-RV32-NEXT: li a4, 157
@@ -2402,10 +2394,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a3, .LBB61_202
; CHECK-RV32-NEXT: .LBB61_201: # %cond.load753
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 190
; CHECK-RV32-NEXT: li a4, 189
@@ -2556,10 +2546,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a2, .LBB61_236
; CHECK-RV32-NEXT: .LBB61_235: # %cond.load881
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 222
; CHECK-RV32-NEXT: li a4, 221
@@ -2710,10 +2698,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: bgez a3, .LBB61_270
; CHECK-RV32-NEXT: .LBB61_269: # %cond.load1009
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 254
; CHECK-RV32-NEXT: li a4, 253
@@ -4253,10 +4239,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_36
; CHECK-RV32-NEXT: .LBB61_573: # %cond.load125
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 33
; CHECK-RV32-NEXT: li a4, 32
@@ -4270,10 +4254,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_37
; CHECK-RV32-NEXT: .LBB61_574: # %cond.load129
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 34
; CHECK-RV32-NEXT: li a4, 33
@@ -4287,10 +4269,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_38
; CHECK-RV32-NEXT: .LBB61_575: # %cond.load133
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 35
; CHECK-RV32-NEXT: li a4, 34
@@ -4304,10 +4284,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_39
; CHECK-RV32-NEXT: .LBB61_576: # %cond.load137
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 36
; CHECK-RV32-NEXT: li a4, 35
@@ -4321,10 +4299,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_40
; CHECK-RV32-NEXT: .LBB61_577: # %cond.load141
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 37
; CHECK-RV32-NEXT: li a4, 36
@@ -4338,10 +4314,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_41
; CHECK-RV32-NEXT: .LBB61_578: # %cond.load145
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 38
; CHECK-RV32-NEXT: li a4, 37
@@ -4355,10 +4329,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_42
; CHECK-RV32-NEXT: .LBB61_579: # %cond.load149
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 39
; CHECK-RV32-NEXT: li a4, 38
@@ -4372,10 +4344,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_43
; CHECK-RV32-NEXT: .LBB61_580: # %cond.load153
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 40
; CHECK-RV32-NEXT: li a4, 39
@@ -4389,10 +4359,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_44
; CHECK-RV32-NEXT: .LBB61_581: # %cond.load157
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 41
; CHECK-RV32-NEXT: li a4, 40
@@ -4406,10 +4374,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_45
; CHECK-RV32-NEXT: .LBB61_582: # %cond.load161
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 42
; CHECK-RV32-NEXT: li a4, 41
@@ -4423,10 +4389,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_46
; CHECK-RV32-NEXT: .LBB61_583: # %cond.load165
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 43
; CHECK-RV32-NEXT: li a4, 42
@@ -4440,10 +4404,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_47
; CHECK-RV32-NEXT: .LBB61_584: # %cond.load169
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 44
; CHECK-RV32-NEXT: li a4, 43
@@ -4457,10 +4419,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_48
; CHECK-RV32-NEXT: .LBB61_585: # %cond.load173
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 45
; CHECK-RV32-NEXT: li a4, 44
@@ -4474,10 +4434,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_49
; CHECK-RV32-NEXT: .LBB61_586: # %cond.load177
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 46
; CHECK-RV32-NEXT: li a4, 45
@@ -4491,10 +4449,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_50
; CHECK-RV32-NEXT: .LBB61_587: # %cond.load181
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 47
; CHECK-RV32-NEXT: li a4, 46
@@ -4508,10 +4464,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_51
; CHECK-RV32-NEXT: .LBB61_588: # %cond.load185
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 48
; CHECK-RV32-NEXT: li a4, 47
@@ -4525,10 +4479,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_52
; CHECK-RV32-NEXT: .LBB61_589: # %cond.load189
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 49
; CHECK-RV32-NEXT: li a4, 48
@@ -4542,10 +4494,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_53
; CHECK-RV32-NEXT: .LBB61_590: # %cond.load193
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 50
; CHECK-RV32-NEXT: li a4, 49
@@ -4559,10 +4509,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_54
; CHECK-RV32-NEXT: .LBB61_591: # %cond.load197
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 51
; CHECK-RV32-NEXT: li a4, 50
@@ -4576,10 +4524,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_55
; CHECK-RV32-NEXT: .LBB61_592: # %cond.load201
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 52
; CHECK-RV32-NEXT: li a4, 51
@@ -4593,10 +4539,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_56
; CHECK-RV32-NEXT: .LBB61_593: # %cond.load205
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 53
; CHECK-RV32-NEXT: li a4, 52
@@ -4610,10 +4554,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_57
; CHECK-RV32-NEXT: .LBB61_594: # %cond.load209
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 54
; CHECK-RV32-NEXT: li a4, 53
@@ -4627,10 +4569,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_58
; CHECK-RV32-NEXT: .LBB61_595: # %cond.load213
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 55
; CHECK-RV32-NEXT: li a4, 54
@@ -4644,10 +4584,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_59
; CHECK-RV32-NEXT: .LBB61_596: # %cond.load217
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 56
; CHECK-RV32-NEXT: li a4, 55
@@ -4661,10 +4599,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_60
; CHECK-RV32-NEXT: .LBB61_597: # %cond.load221
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 57
; CHECK-RV32-NEXT: li a4, 56
@@ -4678,10 +4614,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_61
; CHECK-RV32-NEXT: .LBB61_598: # %cond.load225
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 58
; CHECK-RV32-NEXT: li a4, 57
@@ -4695,10 +4629,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_62
; CHECK-RV32-NEXT: .LBB61_599: # %cond.load229
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 59
; CHECK-RV32-NEXT: li a4, 58
@@ -4712,10 +4644,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_63
; CHECK-RV32-NEXT: .LBB61_600: # %cond.load233
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 60
; CHECK-RV32-NEXT: li a4, 59
@@ -4729,10 +4659,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_64
; CHECK-RV32-NEXT: .LBB61_601: # %cond.load237
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v9, a3
; CHECK-RV32-NEXT: li a3, 61
; CHECK-RV32-NEXT: li a4, 60
@@ -4762,10 +4690,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_70
; CHECK-RV32-NEXT: .LBB61_603: # %cond.load253
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 65
; CHECK-RV32-NEXT: li a4, 64
@@ -4779,10 +4705,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_71
; CHECK-RV32-NEXT: .LBB61_604: # %cond.load257
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 66
; CHECK-RV32-NEXT: li a4, 65
@@ -4796,10 +4720,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_72
; CHECK-RV32-NEXT: .LBB61_605: # %cond.load261
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 67
; CHECK-RV32-NEXT: li a4, 66
@@ -4813,10 +4735,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_73
; CHECK-RV32-NEXT: .LBB61_606: # %cond.load265
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 68
; CHECK-RV32-NEXT: li a4, 67
@@ -4830,10 +4750,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_74
; CHECK-RV32-NEXT: .LBB61_607: # %cond.load269
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 69
; CHECK-RV32-NEXT: li a4, 68
@@ -4847,10 +4765,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_75
; CHECK-RV32-NEXT: .LBB61_608: # %cond.load273
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 70
; CHECK-RV32-NEXT: li a4, 69
@@ -4864,10 +4780,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_76
; CHECK-RV32-NEXT: .LBB61_609: # %cond.load277
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 71
; CHECK-RV32-NEXT: li a4, 70
@@ -4881,10 +4795,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_77
; CHECK-RV32-NEXT: .LBB61_610: # %cond.load281
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 72
; CHECK-RV32-NEXT: li a4, 71
@@ -4898,10 +4810,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_78
; CHECK-RV32-NEXT: .LBB61_611: # %cond.load285
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 73
; CHECK-RV32-NEXT: li a4, 72
@@ -4915,10 +4825,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_79
; CHECK-RV32-NEXT: .LBB61_612: # %cond.load289
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 74
; CHECK-RV32-NEXT: li a4, 73
@@ -4932,10 +4840,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_80
; CHECK-RV32-NEXT: .LBB61_613: # %cond.load293
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 75
; CHECK-RV32-NEXT: li a4, 74
@@ -4949,10 +4855,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_81
; CHECK-RV32-NEXT: .LBB61_614: # %cond.load297
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 76
; CHECK-RV32-NEXT: li a4, 75
@@ -4966,10 +4870,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_82
; CHECK-RV32-NEXT: .LBB61_615: # %cond.load301
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 77
; CHECK-RV32-NEXT: li a4, 76
@@ -4983,10 +4885,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_83
; CHECK-RV32-NEXT: .LBB61_616: # %cond.load305
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 78
; CHECK-RV32-NEXT: li a4, 77
@@ -5000,10 +4900,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_84
; CHECK-RV32-NEXT: .LBB61_617: # %cond.load309
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 79
; CHECK-RV32-NEXT: li a4, 78
@@ -5017,10 +4915,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_85
; CHECK-RV32-NEXT: .LBB61_618: # %cond.load313
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 80
; CHECK-RV32-NEXT: li a4, 79
@@ -5034,10 +4930,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_86
; CHECK-RV32-NEXT: .LBB61_619: # %cond.load317
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 81
; CHECK-RV32-NEXT: li a4, 80
@@ -5051,10 +4945,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_87
; CHECK-RV32-NEXT: .LBB61_620: # %cond.load321
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 82
; CHECK-RV32-NEXT: li a4, 81
@@ -5068,10 +4960,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_88
; CHECK-RV32-NEXT: .LBB61_621: # %cond.load325
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 83
; CHECK-RV32-NEXT: li a4, 82
@@ -5085,10 +4975,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_89
; CHECK-RV32-NEXT: .LBB61_622: # %cond.load329
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 84
; CHECK-RV32-NEXT: li a4, 83
@@ -5102,10 +4990,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_90
; CHECK-RV32-NEXT: .LBB61_623: # %cond.load333
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 85
; CHECK-RV32-NEXT: li a4, 84
@@ -5119,10 +5005,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_91
; CHECK-RV32-NEXT: .LBB61_624: # %cond.load337
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 86
; CHECK-RV32-NEXT: li a4, 85
@@ -5136,10 +5020,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_92
; CHECK-RV32-NEXT: .LBB61_625: # %cond.load341
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 87
; CHECK-RV32-NEXT: li a4, 86
@@ -5153,10 +5035,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_93
; CHECK-RV32-NEXT: .LBB61_626: # %cond.load345
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 88
; CHECK-RV32-NEXT: li a4, 87
@@ -5170,10 +5050,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_94
; CHECK-RV32-NEXT: .LBB61_627: # %cond.load349
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 89
; CHECK-RV32-NEXT: li a4, 88
@@ -5187,10 +5065,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_95
; CHECK-RV32-NEXT: .LBB61_628: # %cond.load353
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 90
; CHECK-RV32-NEXT: li a4, 89
@@ -5204,10 +5080,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_96
; CHECK-RV32-NEXT: .LBB61_629: # %cond.load357
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 91
; CHECK-RV32-NEXT: li a4, 90
@@ -5221,10 +5095,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_97
; CHECK-RV32-NEXT: .LBB61_630: # %cond.load361
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 92
; CHECK-RV32-NEXT: li a4, 91
@@ -5238,10 +5110,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_98
; CHECK-RV32-NEXT: .LBB61_631: # %cond.load365
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a2
; CHECK-RV32-NEXT: li a2, 93
; CHECK-RV32-NEXT: li a4, 92
@@ -5271,10 +5141,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_104
; CHECK-RV32-NEXT: .LBB61_633: # %cond.load381
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 97
; CHECK-RV32-NEXT: li a4, 96
@@ -5288,10 +5156,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_105
; CHECK-RV32-NEXT: .LBB61_634: # %cond.load385
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 98
; CHECK-RV32-NEXT: li a4, 97
@@ -5305,10 +5171,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_106
; CHECK-RV32-NEXT: .LBB61_635: # %cond.load389
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 99
; CHECK-RV32-NEXT: li a4, 98
@@ -5322,10 +5186,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_107
; CHECK-RV32-NEXT: .LBB61_636: # %cond.load393
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 100
; CHECK-RV32-NEXT: li a4, 99
@@ -5339,10 +5201,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_108
; CHECK-RV32-NEXT: .LBB61_637: # %cond.load397
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 101
; CHECK-RV32-NEXT: li a4, 100
@@ -5356,10 +5216,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_109
; CHECK-RV32-NEXT: .LBB61_638: # %cond.load401
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 102
; CHECK-RV32-NEXT: li a4, 101
@@ -5373,10 +5231,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_110
; CHECK-RV32-NEXT: .LBB61_639: # %cond.load405
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 103
; CHECK-RV32-NEXT: li a4, 102
@@ -5390,10 +5246,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_111
; CHECK-RV32-NEXT: .LBB61_640: # %cond.load409
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 104
; CHECK-RV32-NEXT: li a4, 103
@@ -5407,10 +5261,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_112
; CHECK-RV32-NEXT: .LBB61_641: # %cond.load413
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 105
; CHECK-RV32-NEXT: li a4, 104
@@ -5424,10 +5276,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_113
; CHECK-RV32-NEXT: .LBB61_642: # %cond.load417
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 106
; CHECK-RV32-NEXT: li a4, 105
@@ -5441,10 +5291,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_114
; CHECK-RV32-NEXT: .LBB61_643: # %cond.load421
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 107
; CHECK-RV32-NEXT: li a4, 106
@@ -5458,10 +5306,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_115
; CHECK-RV32-NEXT: .LBB61_644: # %cond.load425
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 108
; CHECK-RV32-NEXT: li a4, 107
@@ -5475,10 +5321,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_116
; CHECK-RV32-NEXT: .LBB61_645: # %cond.load429
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 109
; CHECK-RV32-NEXT: li a4, 108
@@ -5492,10 +5336,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_117
; CHECK-RV32-NEXT: .LBB61_646: # %cond.load433
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 110
; CHECK-RV32-NEXT: li a4, 109
@@ -5509,10 +5351,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_118
; CHECK-RV32-NEXT: .LBB61_647: # %cond.load437
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 111
; CHECK-RV32-NEXT: li a4, 110
@@ -5526,10 +5366,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_119
; CHECK-RV32-NEXT: .LBB61_648: # %cond.load441
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 112
; CHECK-RV32-NEXT: li a4, 111
@@ -5543,10 +5381,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_120
; CHECK-RV32-NEXT: .LBB61_649: # %cond.load445
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 113
; CHECK-RV32-NEXT: li a4, 112
@@ -5560,10 +5396,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_121
; CHECK-RV32-NEXT: .LBB61_650: # %cond.load449
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 114
; CHECK-RV32-NEXT: li a4, 113
@@ -5577,10 +5411,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_122
; CHECK-RV32-NEXT: .LBB61_651: # %cond.load453
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 115
; CHECK-RV32-NEXT: li a4, 114
@@ -5594,10 +5426,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_123
; CHECK-RV32-NEXT: .LBB61_652: # %cond.load457
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 116
; CHECK-RV32-NEXT: li a4, 115
@@ -5611,10 +5441,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_124
; CHECK-RV32-NEXT: .LBB61_653: # %cond.load461
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 117
; CHECK-RV32-NEXT: li a4, 116
@@ -5628,10 +5456,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_125
; CHECK-RV32-NEXT: .LBB61_654: # %cond.load465
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 118
; CHECK-RV32-NEXT: li a4, 117
@@ -5645,10 +5471,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_126
; CHECK-RV32-NEXT: .LBB61_655: # %cond.load469
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 119
; CHECK-RV32-NEXT: li a4, 118
@@ -5662,10 +5486,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_127
; CHECK-RV32-NEXT: .LBB61_656: # %cond.load473
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 120
; CHECK-RV32-NEXT: li a4, 119
@@ -5679,10 +5501,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_128
; CHECK-RV32-NEXT: .LBB61_657: # %cond.load477
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 121
; CHECK-RV32-NEXT: li a4, 120
@@ -5696,10 +5516,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_129
; CHECK-RV32-NEXT: .LBB61_658: # %cond.load481
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 122
; CHECK-RV32-NEXT: li a4, 121
@@ -5713,10 +5531,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_130
; CHECK-RV32-NEXT: .LBB61_659: # %cond.load485
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 123
; CHECK-RV32-NEXT: li a4, 122
@@ -5730,10 +5546,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_131
; CHECK-RV32-NEXT: .LBB61_660: # %cond.load489
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 124
; CHECK-RV32-NEXT: li a4, 123
@@ -5747,10 +5561,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_132
; CHECK-RV32-NEXT: .LBB61_661: # %cond.load493
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v10, a3
; CHECK-RV32-NEXT: li a3, 125
; CHECK-RV32-NEXT: li a4, 124
@@ -5780,10 +5592,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_138
; CHECK-RV32-NEXT: .LBB61_663: # %cond.load509
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 129
; CHECK-RV32-NEXT: li a4, 128
@@ -5797,10 +5607,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_139
; CHECK-RV32-NEXT: .LBB61_664: # %cond.load513
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 130
; CHECK-RV32-NEXT: li a4, 129
@@ -5814,10 +5622,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_140
; CHECK-RV32-NEXT: .LBB61_665: # %cond.load517
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 131
; CHECK-RV32-NEXT: li a4, 130
@@ -5831,10 +5637,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_141
; CHECK-RV32-NEXT: .LBB61_666: # %cond.load521
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 132
; CHECK-RV32-NEXT: li a4, 131
@@ -5848,10 +5652,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_142
; CHECK-RV32-NEXT: .LBB61_667: # %cond.load525
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 133
; CHECK-RV32-NEXT: li a4, 132
@@ -5865,10 +5667,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_143
; CHECK-RV32-NEXT: .LBB61_668: # %cond.load529
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 134
; CHECK-RV32-NEXT: li a4, 133
@@ -5882,10 +5682,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_144
; CHECK-RV32-NEXT: .LBB61_669: # %cond.load533
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 135
; CHECK-RV32-NEXT: li a4, 134
@@ -5899,10 +5697,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_145
; CHECK-RV32-NEXT: .LBB61_670: # %cond.load537
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 136
; CHECK-RV32-NEXT: li a4, 135
@@ -5916,10 +5712,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_146
; CHECK-RV32-NEXT: .LBB61_671: # %cond.load541
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 137
; CHECK-RV32-NEXT: li a4, 136
@@ -5933,10 +5727,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_147
; CHECK-RV32-NEXT: .LBB61_672: # %cond.load545
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 138
; CHECK-RV32-NEXT: li a4, 137
@@ -5950,10 +5742,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_148
; CHECK-RV32-NEXT: .LBB61_673: # %cond.load549
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 139
; CHECK-RV32-NEXT: li a4, 138
@@ -5967,10 +5757,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_149
; CHECK-RV32-NEXT: .LBB61_674: # %cond.load553
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 140
; CHECK-RV32-NEXT: li a4, 139
@@ -5984,10 +5772,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_150
; CHECK-RV32-NEXT: .LBB61_675: # %cond.load557
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 141
; CHECK-RV32-NEXT: li a4, 140
@@ -6001,10 +5787,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_151
; CHECK-RV32-NEXT: .LBB61_676: # %cond.load561
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 142
; CHECK-RV32-NEXT: li a4, 141
@@ -6018,10 +5802,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_152
; CHECK-RV32-NEXT: .LBB61_677: # %cond.load565
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 143
; CHECK-RV32-NEXT: li a4, 142
@@ -6035,10 +5817,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_153
; CHECK-RV32-NEXT: .LBB61_678: # %cond.load569
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 144
; CHECK-RV32-NEXT: li a4, 143
@@ -6052,10 +5832,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_154
; CHECK-RV32-NEXT: .LBB61_679: # %cond.load573
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 145
; CHECK-RV32-NEXT: li a4, 144
@@ -6069,10 +5847,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_155
; CHECK-RV32-NEXT: .LBB61_680: # %cond.load577
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 146
; CHECK-RV32-NEXT: li a4, 145
@@ -6086,10 +5862,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_156
; CHECK-RV32-NEXT: .LBB61_681: # %cond.load581
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 147
; CHECK-RV32-NEXT: li a4, 146
@@ -6103,10 +5877,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_157
; CHECK-RV32-NEXT: .LBB61_682: # %cond.load585
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 148
; CHECK-RV32-NEXT: li a4, 147
@@ -6120,10 +5892,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_158
; CHECK-RV32-NEXT: .LBB61_683: # %cond.load589
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 149
; CHECK-RV32-NEXT: li a4, 148
@@ -6137,10 +5907,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_159
; CHECK-RV32-NEXT: .LBB61_684: # %cond.load593
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 150
; CHECK-RV32-NEXT: li a4, 149
@@ -6154,10 +5922,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_160
; CHECK-RV32-NEXT: .LBB61_685: # %cond.load597
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 151
; CHECK-RV32-NEXT: li a4, 150
@@ -6171,10 +5937,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_161
; CHECK-RV32-NEXT: .LBB61_686: # %cond.load601
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 152
; CHECK-RV32-NEXT: li a4, 151
@@ -6188,10 +5952,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_162
; CHECK-RV32-NEXT: .LBB61_687: # %cond.load605
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 153
; CHECK-RV32-NEXT: li a4, 152
@@ -6205,10 +5967,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_163
; CHECK-RV32-NEXT: .LBB61_688: # %cond.load609
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 154
; CHECK-RV32-NEXT: li a4, 153
@@ -6222,10 +5982,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_164
; CHECK-RV32-NEXT: .LBB61_689: # %cond.load613
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 155
; CHECK-RV32-NEXT: li a4, 154
@@ -6239,10 +5997,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_165
; CHECK-RV32-NEXT: .LBB61_690: # %cond.load617
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 156
; CHECK-RV32-NEXT: li a4, 155
@@ -6256,10 +6012,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_166
; CHECK-RV32-NEXT: .LBB61_691: # %cond.load621
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 157
; CHECK-RV32-NEXT: li a4, 156
@@ -6289,10 +6043,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_172
; CHECK-RV32-NEXT: .LBB61_693: # %cond.load637
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 161
; CHECK-RV32-NEXT: li a4, 160
@@ -6306,10 +6058,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_173
; CHECK-RV32-NEXT: .LBB61_694: # %cond.load641
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 162
; CHECK-RV32-NEXT: li a4, 161
@@ -6323,10 +6073,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_174
; CHECK-RV32-NEXT: .LBB61_695: # %cond.load645
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 163
; CHECK-RV32-NEXT: li a4, 162
@@ -6340,10 +6088,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_175
; CHECK-RV32-NEXT: .LBB61_696: # %cond.load649
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 164
; CHECK-RV32-NEXT: li a4, 163
@@ -6357,10 +6103,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_176
; CHECK-RV32-NEXT: .LBB61_697: # %cond.load653
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 165
; CHECK-RV32-NEXT: li a4, 164
@@ -6374,10 +6118,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_177
; CHECK-RV32-NEXT: .LBB61_698: # %cond.load657
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 166
; CHECK-RV32-NEXT: li a4, 165
@@ -6391,10 +6133,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_178
; CHECK-RV32-NEXT: .LBB61_699: # %cond.load661
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 167
; CHECK-RV32-NEXT: li a4, 166
@@ -6408,10 +6148,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_179
; CHECK-RV32-NEXT: .LBB61_700: # %cond.load665
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 168
; CHECK-RV32-NEXT: li a4, 167
@@ -6425,10 +6163,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_180
; CHECK-RV32-NEXT: .LBB61_701: # %cond.load669
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 169
; CHECK-RV32-NEXT: li a4, 168
@@ -6442,10 +6178,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_181
; CHECK-RV32-NEXT: .LBB61_702: # %cond.load673
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 170
; CHECK-RV32-NEXT: li a4, 169
@@ -6459,10 +6193,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_182
; CHECK-RV32-NEXT: .LBB61_703: # %cond.load677
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 171
; CHECK-RV32-NEXT: li a4, 170
@@ -6476,10 +6208,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_183
; CHECK-RV32-NEXT: .LBB61_704: # %cond.load681
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 172
; CHECK-RV32-NEXT: li a4, 171
@@ -6493,10 +6223,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_184
; CHECK-RV32-NEXT: .LBB61_705: # %cond.load685
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 173
; CHECK-RV32-NEXT: li a4, 172
@@ -6510,10 +6238,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_185
; CHECK-RV32-NEXT: .LBB61_706: # %cond.load689
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 174
; CHECK-RV32-NEXT: li a4, 173
@@ -6527,10 +6253,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_186
; CHECK-RV32-NEXT: .LBB61_707: # %cond.load693
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 175
; CHECK-RV32-NEXT: li a4, 174
@@ -6544,10 +6268,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_187
; CHECK-RV32-NEXT: .LBB61_708: # %cond.load697
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 176
; CHECK-RV32-NEXT: li a4, 175
@@ -6561,10 +6283,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_188
; CHECK-RV32-NEXT: .LBB61_709: # %cond.load701
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 177
; CHECK-RV32-NEXT: li a4, 176
@@ -6578,10 +6298,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_189
; CHECK-RV32-NEXT: .LBB61_710: # %cond.load705
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 178
; CHECK-RV32-NEXT: li a4, 177
@@ -6595,10 +6313,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_190
; CHECK-RV32-NEXT: .LBB61_711: # %cond.load709
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 179
; CHECK-RV32-NEXT: li a4, 178
@@ -6612,10 +6328,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_191
; CHECK-RV32-NEXT: .LBB61_712: # %cond.load713
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 180
; CHECK-RV32-NEXT: li a4, 179
@@ -6629,10 +6343,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_192
; CHECK-RV32-NEXT: .LBB61_713: # %cond.load717
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 181
; CHECK-RV32-NEXT: li a4, 180
@@ -6646,10 +6358,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_193
; CHECK-RV32-NEXT: .LBB61_714: # %cond.load721
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 182
; CHECK-RV32-NEXT: li a4, 181
@@ -6663,10 +6373,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_194
; CHECK-RV32-NEXT: .LBB61_715: # %cond.load725
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 183
; CHECK-RV32-NEXT: li a4, 182
@@ -6680,10 +6388,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_195
; CHECK-RV32-NEXT: .LBB61_716: # %cond.load729
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 184
; CHECK-RV32-NEXT: li a4, 183
@@ -6697,10 +6403,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_196
; CHECK-RV32-NEXT: .LBB61_717: # %cond.load733
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 185
; CHECK-RV32-NEXT: li a4, 184
@@ -6714,10 +6418,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_197
; CHECK-RV32-NEXT: .LBB61_718: # %cond.load737
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 186
; CHECK-RV32-NEXT: li a4, 185
@@ -6731,10 +6433,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_198
; CHECK-RV32-NEXT: .LBB61_719: # %cond.load741
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 187
; CHECK-RV32-NEXT: li a4, 186
@@ -6748,10 +6448,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_199
; CHECK-RV32-NEXT: .LBB61_720: # %cond.load745
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 188
; CHECK-RV32-NEXT: li a4, 187
@@ -6765,10 +6463,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_200
; CHECK-RV32-NEXT: .LBB61_721: # %cond.load749
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 189
; CHECK-RV32-NEXT: li a4, 188
@@ -6798,10 +6494,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_206
; CHECK-RV32-NEXT: .LBB61_723: # %cond.load765
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 193
; CHECK-RV32-NEXT: li a4, 192
@@ -6815,10 +6509,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_207
; CHECK-RV32-NEXT: .LBB61_724: # %cond.load769
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 194
; CHECK-RV32-NEXT: li a4, 193
@@ -6832,10 +6524,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_208
; CHECK-RV32-NEXT: .LBB61_725: # %cond.load773
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 195
; CHECK-RV32-NEXT: li a4, 194
@@ -6849,10 +6539,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_209
; CHECK-RV32-NEXT: .LBB61_726: # %cond.load777
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 196
; CHECK-RV32-NEXT: li a4, 195
@@ -6866,10 +6554,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_210
; CHECK-RV32-NEXT: .LBB61_727: # %cond.load781
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 197
; CHECK-RV32-NEXT: li a4, 196
@@ -6883,10 +6569,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_211
; CHECK-RV32-NEXT: .LBB61_728: # %cond.load785
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 198
; CHECK-RV32-NEXT: li a4, 197
@@ -6900,10 +6584,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_212
; CHECK-RV32-NEXT: .LBB61_729: # %cond.load789
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 199
; CHECK-RV32-NEXT: li a4, 198
@@ -6917,10 +6599,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_213
; CHECK-RV32-NEXT: .LBB61_730: # %cond.load793
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 200
; CHECK-RV32-NEXT: li a4, 199
@@ -6934,10 +6614,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_214
; CHECK-RV32-NEXT: .LBB61_731: # %cond.load797
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 201
; CHECK-RV32-NEXT: li a4, 200
@@ -6951,10 +6629,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_215
; CHECK-RV32-NEXT: .LBB61_732: # %cond.load801
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 202
; CHECK-RV32-NEXT: li a4, 201
@@ -6968,10 +6644,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_216
; CHECK-RV32-NEXT: .LBB61_733: # %cond.load805
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 203
; CHECK-RV32-NEXT: li a4, 202
@@ -6985,10 +6659,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_217
; CHECK-RV32-NEXT: .LBB61_734: # %cond.load809
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 204
; CHECK-RV32-NEXT: li a4, 203
@@ -7002,10 +6674,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_218
; CHECK-RV32-NEXT: .LBB61_735: # %cond.load813
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 205
; CHECK-RV32-NEXT: li a4, 204
@@ -7019,10 +6689,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_219
; CHECK-RV32-NEXT: .LBB61_736: # %cond.load817
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 206
; CHECK-RV32-NEXT: li a4, 205
@@ -7036,10 +6704,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_220
; CHECK-RV32-NEXT: .LBB61_737: # %cond.load821
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 207
; CHECK-RV32-NEXT: li a4, 206
@@ -7053,10 +6719,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_221
; CHECK-RV32-NEXT: .LBB61_738: # %cond.load825
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 208
; CHECK-RV32-NEXT: li a4, 207
@@ -7070,10 +6734,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_222
; CHECK-RV32-NEXT: .LBB61_739: # %cond.load829
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 209
; CHECK-RV32-NEXT: li a4, 208
@@ -7087,10 +6749,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_223
; CHECK-RV32-NEXT: .LBB61_740: # %cond.load833
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 210
; CHECK-RV32-NEXT: li a4, 209
@@ -7104,10 +6764,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_224
; CHECK-RV32-NEXT: .LBB61_741: # %cond.load837
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 211
; CHECK-RV32-NEXT: li a4, 210
@@ -7121,10 +6779,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_225
; CHECK-RV32-NEXT: .LBB61_742: # %cond.load841
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 212
; CHECK-RV32-NEXT: li a4, 211
@@ -7138,10 +6794,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_226
; CHECK-RV32-NEXT: .LBB61_743: # %cond.load845
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 213
; CHECK-RV32-NEXT: li a4, 212
@@ -7155,10 +6809,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_227
; CHECK-RV32-NEXT: .LBB61_744: # %cond.load849
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 214
; CHECK-RV32-NEXT: li a4, 213
@@ -7172,10 +6824,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_228
; CHECK-RV32-NEXT: .LBB61_745: # %cond.load853
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 215
; CHECK-RV32-NEXT: li a4, 214
@@ -7189,10 +6839,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_229
; CHECK-RV32-NEXT: .LBB61_746: # %cond.load857
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 216
; CHECK-RV32-NEXT: li a4, 215
@@ -7206,10 +6854,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_230
; CHECK-RV32-NEXT: .LBB61_747: # %cond.load861
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 217
; CHECK-RV32-NEXT: li a4, 216
@@ -7223,10 +6869,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_231
; CHECK-RV32-NEXT: .LBB61_748: # %cond.load865
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 218
; CHECK-RV32-NEXT: li a4, 217
@@ -7240,10 +6884,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_232
; CHECK-RV32-NEXT: .LBB61_749: # %cond.load869
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 219
; CHECK-RV32-NEXT: li a4, 218
@@ -7257,10 +6899,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_233
; CHECK-RV32-NEXT: .LBB61_750: # %cond.load873
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 220
; CHECK-RV32-NEXT: li a4, 219
@@ -7274,10 +6914,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_234
; CHECK-RV32-NEXT: .LBB61_751: # %cond.load877
; CHECK-RV32-NEXT: lbu a2, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v24, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a2
; CHECK-RV32-NEXT: li a2, 221
; CHECK-RV32-NEXT: li a4, 220
@@ -7307,10 +6945,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_240
; CHECK-RV32-NEXT: .LBB61_753: # %cond.load893
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 225
; CHECK-RV32-NEXT: li a4, 224
@@ -7324,10 +6960,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_241
; CHECK-RV32-NEXT: .LBB61_754: # %cond.load897
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 226
; CHECK-RV32-NEXT: li a4, 225
@@ -7341,10 +6975,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_242
; CHECK-RV32-NEXT: .LBB61_755: # %cond.load901
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 227
; CHECK-RV32-NEXT: li a4, 226
@@ -7358,10 +6990,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_243
; CHECK-RV32-NEXT: .LBB61_756: # %cond.load905
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 228
; CHECK-RV32-NEXT: li a4, 227
@@ -7375,10 +7005,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_244
; CHECK-RV32-NEXT: .LBB61_757: # %cond.load909
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 229
; CHECK-RV32-NEXT: li a4, 228
@@ -7392,10 +7020,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_245
; CHECK-RV32-NEXT: .LBB61_758: # %cond.load913
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 230
; CHECK-RV32-NEXT: li a4, 229
@@ -7409,10 +7035,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_246
; CHECK-RV32-NEXT: .LBB61_759: # %cond.load917
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 231
; CHECK-RV32-NEXT: li a4, 230
@@ -7426,10 +7050,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_247
; CHECK-RV32-NEXT: .LBB61_760: # %cond.load921
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 232
; CHECK-RV32-NEXT: li a4, 231
@@ -7443,10 +7065,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_248
; CHECK-RV32-NEXT: .LBB61_761: # %cond.load925
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 233
; CHECK-RV32-NEXT: li a4, 232
@@ -7460,10 +7080,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_249
; CHECK-RV32-NEXT: .LBB61_762: # %cond.load929
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 234
; CHECK-RV32-NEXT: li a4, 233
@@ -7477,10 +7095,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_250
; CHECK-RV32-NEXT: .LBB61_763: # %cond.load933
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 235
; CHECK-RV32-NEXT: li a4, 234
@@ -7494,10 +7110,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_251
; CHECK-RV32-NEXT: .LBB61_764: # %cond.load937
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 236
; CHECK-RV32-NEXT: li a4, 235
@@ -7511,10 +7125,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_252
; CHECK-RV32-NEXT: .LBB61_765: # %cond.load941
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 237
; CHECK-RV32-NEXT: li a4, 236
@@ -7528,10 +7140,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_253
; CHECK-RV32-NEXT: .LBB61_766: # %cond.load945
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 238
; CHECK-RV32-NEXT: li a4, 237
@@ -7545,10 +7155,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_254
; CHECK-RV32-NEXT: .LBB61_767: # %cond.load949
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 239
; CHECK-RV32-NEXT: li a4, 238
@@ -7562,10 +7170,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_255
; CHECK-RV32-NEXT: .LBB61_768: # %cond.load953
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 240
; CHECK-RV32-NEXT: li a4, 239
@@ -7579,10 +7185,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_256
; CHECK-RV32-NEXT: .LBB61_769: # %cond.load957
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 241
; CHECK-RV32-NEXT: li a4, 240
@@ -7596,10 +7200,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_257
; CHECK-RV32-NEXT: .LBB61_770: # %cond.load961
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 242
; CHECK-RV32-NEXT: li a4, 241
@@ -7613,10 +7215,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_258
; CHECK-RV32-NEXT: .LBB61_771: # %cond.load965
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 243
; CHECK-RV32-NEXT: li a4, 242
@@ -7630,10 +7230,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_259
; CHECK-RV32-NEXT: .LBB61_772: # %cond.load969
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 244
; CHECK-RV32-NEXT: li a4, 243
@@ -7647,10 +7245,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_260
; CHECK-RV32-NEXT: .LBB61_773: # %cond.load973
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 245
; CHECK-RV32-NEXT: li a4, 244
@@ -7664,10 +7260,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_261
; CHECK-RV32-NEXT: .LBB61_774: # %cond.load977
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 246
; CHECK-RV32-NEXT: li a4, 245
@@ -7681,10 +7275,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_262
; CHECK-RV32-NEXT: .LBB61_775: # %cond.load981
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 247
; CHECK-RV32-NEXT: li a4, 246
@@ -7698,10 +7290,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_263
; CHECK-RV32-NEXT: .LBB61_776: # %cond.load985
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 248
; CHECK-RV32-NEXT: li a4, 247
@@ -7715,10 +7305,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_264
; CHECK-RV32-NEXT: .LBB61_777: # %cond.load989
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 249
; CHECK-RV32-NEXT: li a4, 248
@@ -7732,10 +7320,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_265
; CHECK-RV32-NEXT: .LBB61_778: # %cond.load993
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 250
; CHECK-RV32-NEXT: li a4, 249
@@ -7749,10 +7335,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_266
; CHECK-RV32-NEXT: .LBB61_779: # %cond.load997
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 251
; CHECK-RV32-NEXT: li a4, 250
@@ -7766,10 +7350,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_267
; CHECK-RV32-NEXT: .LBB61_780: # %cond.load1001
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 252
; CHECK-RV32-NEXT: li a4, 251
@@ -7783,10 +7365,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV32-NEXT: j .LBB61_268
; CHECK-RV32-NEXT: .LBB61_781: # %cond.load1005
; CHECK-RV32-NEXT: lbu a3, 0(a0)
-; CHECK-RV32-NEXT: li a4, 512
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv8r.v v16, v8
-; CHECK-RV32-NEXT: vsetvli zero, a4, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv.s.x v12, a3
; CHECK-RV32-NEXT: li a3, 253
; CHECK-RV32-NEXT: li a4, 252
@@ -11207,10 +10787,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: bgez a1, .LBB61_63
; CHECK-RV64-NEXT: .LBB61_62: # %cond.load241
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 62
; CHECK-RV64-NEXT: li a3, 61
@@ -11489,10 +11067,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: bgez a2, .LBB61_129
; CHECK-RV64-NEXT: .LBB61_128: # %cond.load497
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 126
; CHECK-RV64-NEXT: li a3, 125
@@ -11771,10 +11347,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: bgez a1, .LBB61_195
; CHECK-RV64-NEXT: .LBB61_194: # %cond.load753
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 190
; CHECK-RV64-NEXT: li a3, 189
@@ -12053,10 +11627,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: bgez a2, .LBB61_261
; CHECK-RV64-NEXT: .LBB61_260: # %cond.load1009
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 254
; CHECK-RV64-NEXT: li a3, 253
@@ -13541,10 +13113,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_32
; CHECK-RV64-NEXT: .LBB61_558: # %cond.load121
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 32
; CHECK-RV64-NEXT: vsetvli zero, a1, e8, m1, tu, ma
@@ -13557,10 +13127,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_33
; CHECK-RV64-NEXT: .LBB61_559: # %cond.load125
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 33
; CHECK-RV64-NEXT: li a3, 32
@@ -13574,10 +13142,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_34
; CHECK-RV64-NEXT: .LBB61_560: # %cond.load129
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 34
; CHECK-RV64-NEXT: li a3, 33
@@ -13591,10 +13157,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_35
; CHECK-RV64-NEXT: .LBB61_561: # %cond.load133
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 35
; CHECK-RV64-NEXT: li a3, 34
@@ -13608,10 +13172,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_36
; CHECK-RV64-NEXT: .LBB61_562: # %cond.load137
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 36
; CHECK-RV64-NEXT: li a3, 35
@@ -13625,10 +13187,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_37
; CHECK-RV64-NEXT: .LBB61_563: # %cond.load141
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 37
; CHECK-RV64-NEXT: li a3, 36
@@ -13642,10 +13202,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_38
; CHECK-RV64-NEXT: .LBB61_564: # %cond.load145
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 38
; CHECK-RV64-NEXT: li a3, 37
@@ -13659,10 +13217,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_39
; CHECK-RV64-NEXT: .LBB61_565: # %cond.load149
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 39
; CHECK-RV64-NEXT: li a3, 38
@@ -13676,10 +13232,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_40
; CHECK-RV64-NEXT: .LBB61_566: # %cond.load153
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 40
; CHECK-RV64-NEXT: li a3, 39
@@ -13693,10 +13247,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_41
; CHECK-RV64-NEXT: .LBB61_567: # %cond.load157
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 41
; CHECK-RV64-NEXT: li a3, 40
@@ -13710,10 +13262,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_42
; CHECK-RV64-NEXT: .LBB61_568: # %cond.load161
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 42
; CHECK-RV64-NEXT: li a3, 41
@@ -13727,10 +13277,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_43
; CHECK-RV64-NEXT: .LBB61_569: # %cond.load165
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 43
; CHECK-RV64-NEXT: li a3, 42
@@ -13744,10 +13292,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_44
; CHECK-RV64-NEXT: .LBB61_570: # %cond.load169
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 44
; CHECK-RV64-NEXT: li a3, 43
@@ -13761,10 +13307,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_45
; CHECK-RV64-NEXT: .LBB61_571: # %cond.load173
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 45
; CHECK-RV64-NEXT: li a3, 44
@@ -13778,10 +13322,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_46
; CHECK-RV64-NEXT: .LBB61_572: # %cond.load177
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 46
; CHECK-RV64-NEXT: li a3, 45
@@ -13795,10 +13337,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_47
; CHECK-RV64-NEXT: .LBB61_573: # %cond.load181
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 47
; CHECK-RV64-NEXT: li a3, 46
@@ -13812,10 +13352,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_48
; CHECK-RV64-NEXT: .LBB61_574: # %cond.load185
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 48
; CHECK-RV64-NEXT: li a3, 47
@@ -13829,10 +13367,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_49
; CHECK-RV64-NEXT: .LBB61_575: # %cond.load189
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 49
; CHECK-RV64-NEXT: li a3, 48
@@ -13846,10 +13382,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_50
; CHECK-RV64-NEXT: .LBB61_576: # %cond.load193
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 50
; CHECK-RV64-NEXT: li a3, 49
@@ -13863,10 +13397,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_51
; CHECK-RV64-NEXT: .LBB61_577: # %cond.load197
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 51
; CHECK-RV64-NEXT: li a3, 50
@@ -13880,10 +13412,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_52
; CHECK-RV64-NEXT: .LBB61_578: # %cond.load201
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 52
; CHECK-RV64-NEXT: li a3, 51
@@ -13897,10 +13427,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_53
; CHECK-RV64-NEXT: .LBB61_579: # %cond.load205
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 53
; CHECK-RV64-NEXT: li a3, 52
@@ -13914,10 +13442,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_54
; CHECK-RV64-NEXT: .LBB61_580: # %cond.load209
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 54
; CHECK-RV64-NEXT: li a3, 53
@@ -13931,10 +13457,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_55
; CHECK-RV64-NEXT: .LBB61_581: # %cond.load213
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 55
; CHECK-RV64-NEXT: li a3, 54
@@ -13948,10 +13472,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_56
; CHECK-RV64-NEXT: .LBB61_582: # %cond.load217
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 56
; CHECK-RV64-NEXT: li a3, 55
@@ -13965,10 +13487,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_57
; CHECK-RV64-NEXT: .LBB61_583: # %cond.load221
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 57
; CHECK-RV64-NEXT: li a3, 56
@@ -13982,10 +13502,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_58
; CHECK-RV64-NEXT: .LBB61_584: # %cond.load225
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 58
; CHECK-RV64-NEXT: li a3, 57
@@ -13999,10 +13517,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_59
; CHECK-RV64-NEXT: .LBB61_585: # %cond.load229
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 59
; CHECK-RV64-NEXT: li a3, 58
@@ -14016,10 +13532,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_60
; CHECK-RV64-NEXT: .LBB61_586: # %cond.load233
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 60
; CHECK-RV64-NEXT: li a3, 59
@@ -14033,10 +13547,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_61
; CHECK-RV64-NEXT: .LBB61_587: # %cond.load237
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v9, a1
; CHECK-RV64-NEXT: li a1, 61
; CHECK-RV64-NEXT: li a3, 60
@@ -14066,10 +13578,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_67
; CHECK-RV64-NEXT: .LBB61_589: # %cond.load253
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 65
; CHECK-RV64-NEXT: li a3, 64
@@ -14083,10 +13593,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_68
; CHECK-RV64-NEXT: .LBB61_590: # %cond.load257
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 66
; CHECK-RV64-NEXT: li a3, 65
@@ -14100,10 +13608,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_69
; CHECK-RV64-NEXT: .LBB61_591: # %cond.load261
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 67
; CHECK-RV64-NEXT: li a3, 66
@@ -14117,10 +13623,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_70
; CHECK-RV64-NEXT: .LBB61_592: # %cond.load265
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 68
; CHECK-RV64-NEXT: li a3, 67
@@ -14134,10 +13638,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_71
; CHECK-RV64-NEXT: .LBB61_593: # %cond.load269
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 69
; CHECK-RV64-NEXT: li a3, 68
@@ -14151,10 +13653,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_72
; CHECK-RV64-NEXT: .LBB61_594: # %cond.load273
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 70
; CHECK-RV64-NEXT: li a3, 69
@@ -14168,10 +13668,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_73
; CHECK-RV64-NEXT: .LBB61_595: # %cond.load277
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 71
; CHECK-RV64-NEXT: li a3, 70
@@ -14185,10 +13683,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_74
; CHECK-RV64-NEXT: .LBB61_596: # %cond.load281
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 72
; CHECK-RV64-NEXT: li a3, 71
@@ -14202,10 +13698,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_75
; CHECK-RV64-NEXT: .LBB61_597: # %cond.load285
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 73
; CHECK-RV64-NEXT: li a3, 72
@@ -14219,10 +13713,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_76
; CHECK-RV64-NEXT: .LBB61_598: # %cond.load289
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 74
; CHECK-RV64-NEXT: li a3, 73
@@ -14236,10 +13728,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_77
; CHECK-RV64-NEXT: .LBB61_599: # %cond.load293
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 75
; CHECK-RV64-NEXT: li a3, 74
@@ -14253,10 +13743,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_78
; CHECK-RV64-NEXT: .LBB61_600: # %cond.load297
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 76
; CHECK-RV64-NEXT: li a3, 75
@@ -14270,10 +13758,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_79
; CHECK-RV64-NEXT: .LBB61_601: # %cond.load301
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 77
; CHECK-RV64-NEXT: li a3, 76
@@ -14287,10 +13773,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_80
; CHECK-RV64-NEXT: .LBB61_602: # %cond.load305
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 78
; CHECK-RV64-NEXT: li a3, 77
@@ -14304,10 +13788,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_81
; CHECK-RV64-NEXT: .LBB61_603: # %cond.load309
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 79
; CHECK-RV64-NEXT: li a3, 78
@@ -14321,10 +13803,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_82
; CHECK-RV64-NEXT: .LBB61_604: # %cond.load313
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 80
; CHECK-RV64-NEXT: li a3, 79
@@ -14338,10 +13818,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_83
; CHECK-RV64-NEXT: .LBB61_605: # %cond.load317
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 81
; CHECK-RV64-NEXT: li a3, 80
@@ -14355,10 +13833,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_84
; CHECK-RV64-NEXT: .LBB61_606: # %cond.load321
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 82
; CHECK-RV64-NEXT: li a3, 81
@@ -14372,10 +13848,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_85
; CHECK-RV64-NEXT: .LBB61_607: # %cond.load325
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 83
; CHECK-RV64-NEXT: li a3, 82
@@ -14389,10 +13863,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_86
; CHECK-RV64-NEXT: .LBB61_608: # %cond.load329
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 84
; CHECK-RV64-NEXT: li a3, 83
@@ -14406,10 +13878,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_87
; CHECK-RV64-NEXT: .LBB61_609: # %cond.load333
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 85
; CHECK-RV64-NEXT: li a3, 84
@@ -14423,10 +13893,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_88
; CHECK-RV64-NEXT: .LBB61_610: # %cond.load337
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 86
; CHECK-RV64-NEXT: li a3, 85
@@ -14440,10 +13908,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_89
; CHECK-RV64-NEXT: .LBB61_611: # %cond.load341
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 87
; CHECK-RV64-NEXT: li a3, 86
@@ -14457,10 +13923,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_90
; CHECK-RV64-NEXT: .LBB61_612: # %cond.load345
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 88
; CHECK-RV64-NEXT: li a3, 87
@@ -14474,10 +13938,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_91
; CHECK-RV64-NEXT: .LBB61_613: # %cond.load349
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 89
; CHECK-RV64-NEXT: li a3, 88
@@ -14491,10 +13953,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_92
; CHECK-RV64-NEXT: .LBB61_614: # %cond.load353
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 90
; CHECK-RV64-NEXT: li a3, 89
@@ -14508,10 +13968,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_93
; CHECK-RV64-NEXT: .LBB61_615: # %cond.load357
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 91
; CHECK-RV64-NEXT: li a3, 90
@@ -14525,10 +13983,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_94
; CHECK-RV64-NEXT: .LBB61_616: # %cond.load361
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 92
; CHECK-RV64-NEXT: li a3, 91
@@ -14542,10 +13998,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_95
; CHECK-RV64-NEXT: .LBB61_617: # %cond.load365
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 93
; CHECK-RV64-NEXT: li a3, 92
@@ -14559,10 +14013,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_96
; CHECK-RV64-NEXT: .LBB61_618: # %cond.load369
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 94
; CHECK-RV64-NEXT: li a3, 93
@@ -14576,10 +14028,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_97
; CHECK-RV64-NEXT: .LBB61_619: # %cond.load373
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 95
; CHECK-RV64-NEXT: li a3, 94
@@ -14593,10 +14043,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_98
; CHECK-RV64-NEXT: .LBB61_620: # %cond.load377
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 96
; CHECK-RV64-NEXT: li a3, 95
@@ -14610,10 +14058,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_99
; CHECK-RV64-NEXT: .LBB61_621: # %cond.load381
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 97
; CHECK-RV64-NEXT: li a3, 96
@@ -14627,10 +14073,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_100
; CHECK-RV64-NEXT: .LBB61_622: # %cond.load385
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 98
; CHECK-RV64-NEXT: li a3, 97
@@ -14644,10 +14088,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_101
; CHECK-RV64-NEXT: .LBB61_623: # %cond.load389
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 99
; CHECK-RV64-NEXT: li a3, 98
@@ -14661,10 +14103,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_102
; CHECK-RV64-NEXT: .LBB61_624: # %cond.load393
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 100
; CHECK-RV64-NEXT: li a3, 99
@@ -14678,10 +14118,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_103
; CHECK-RV64-NEXT: .LBB61_625: # %cond.load397
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 101
; CHECK-RV64-NEXT: li a3, 100
@@ -14695,10 +14133,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_104
; CHECK-RV64-NEXT: .LBB61_626: # %cond.load401
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 102
; CHECK-RV64-NEXT: li a3, 101
@@ -14712,10 +14148,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_105
; CHECK-RV64-NEXT: .LBB61_627: # %cond.load405
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 103
; CHECK-RV64-NEXT: li a3, 102
@@ -14729,10 +14163,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_106
; CHECK-RV64-NEXT: .LBB61_628: # %cond.load409
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 104
; CHECK-RV64-NEXT: li a3, 103
@@ -14746,10 +14178,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_107
; CHECK-RV64-NEXT: .LBB61_629: # %cond.load413
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 105
; CHECK-RV64-NEXT: li a3, 104
@@ -14763,10 +14193,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_108
; CHECK-RV64-NEXT: .LBB61_630: # %cond.load417
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 106
; CHECK-RV64-NEXT: li a3, 105
@@ -14780,10 +14208,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_109
; CHECK-RV64-NEXT: .LBB61_631: # %cond.load421
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 107
; CHECK-RV64-NEXT: li a3, 106
@@ -14797,10 +14223,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_110
; CHECK-RV64-NEXT: .LBB61_632: # %cond.load425
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 108
; CHECK-RV64-NEXT: li a3, 107
@@ -14814,10 +14238,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_111
; CHECK-RV64-NEXT: .LBB61_633: # %cond.load429
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 109
; CHECK-RV64-NEXT: li a3, 108
@@ -14831,10 +14253,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_112
; CHECK-RV64-NEXT: .LBB61_634: # %cond.load433
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 110
; CHECK-RV64-NEXT: li a3, 109
@@ -14848,10 +14268,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_113
; CHECK-RV64-NEXT: .LBB61_635: # %cond.load437
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 111
; CHECK-RV64-NEXT: li a3, 110
@@ -14865,10 +14283,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_114
; CHECK-RV64-NEXT: .LBB61_636: # %cond.load441
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 112
; CHECK-RV64-NEXT: li a3, 111
@@ -14882,10 +14298,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_115
; CHECK-RV64-NEXT: .LBB61_637: # %cond.load445
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 113
; CHECK-RV64-NEXT: li a3, 112
@@ -14899,10 +14313,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_116
; CHECK-RV64-NEXT: .LBB61_638: # %cond.load449
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 114
; CHECK-RV64-NEXT: li a3, 113
@@ -14916,10 +14328,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_117
; CHECK-RV64-NEXT: .LBB61_639: # %cond.load453
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 115
; CHECK-RV64-NEXT: li a3, 114
@@ -14933,10 +14343,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_118
; CHECK-RV64-NEXT: .LBB61_640: # %cond.load457
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 116
; CHECK-RV64-NEXT: li a3, 115
@@ -14950,10 +14358,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_119
; CHECK-RV64-NEXT: .LBB61_641: # %cond.load461
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 117
; CHECK-RV64-NEXT: li a3, 116
@@ -14967,10 +14373,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_120
; CHECK-RV64-NEXT: .LBB61_642: # %cond.load465
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 118
; CHECK-RV64-NEXT: li a3, 117
@@ -14984,10 +14388,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_121
; CHECK-RV64-NEXT: .LBB61_643: # %cond.load469
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 119
; CHECK-RV64-NEXT: li a3, 118
@@ -15001,10 +14403,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_122
; CHECK-RV64-NEXT: .LBB61_644: # %cond.load473
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 120
; CHECK-RV64-NEXT: li a3, 119
@@ -15018,10 +14418,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_123
; CHECK-RV64-NEXT: .LBB61_645: # %cond.load477
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 121
; CHECK-RV64-NEXT: li a3, 120
@@ -15035,10 +14433,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_124
; CHECK-RV64-NEXT: .LBB61_646: # %cond.load481
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 122
; CHECK-RV64-NEXT: li a3, 121
@@ -15052,10 +14448,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_125
; CHECK-RV64-NEXT: .LBB61_647: # %cond.load485
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 123
; CHECK-RV64-NEXT: li a3, 122
@@ -15069,10 +14463,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_126
; CHECK-RV64-NEXT: .LBB61_648: # %cond.load489
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 124
; CHECK-RV64-NEXT: li a3, 123
@@ -15086,10 +14478,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_127
; CHECK-RV64-NEXT: .LBB61_649: # %cond.load493
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v10, a2
; CHECK-RV64-NEXT: li a2, 125
; CHECK-RV64-NEXT: li a3, 124
@@ -15119,10 +14509,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_133
; CHECK-RV64-NEXT: .LBB61_651: # %cond.load509
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 129
; CHECK-RV64-NEXT: li a3, 128
@@ -15136,10 +14524,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_134
; CHECK-RV64-NEXT: .LBB61_652: # %cond.load513
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 130
; CHECK-RV64-NEXT: li a3, 129
@@ -15153,10 +14539,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_135
; CHECK-RV64-NEXT: .LBB61_653: # %cond.load517
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 131
; CHECK-RV64-NEXT: li a3, 130
@@ -15170,10 +14554,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_136
; CHECK-RV64-NEXT: .LBB61_654: # %cond.load521
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 132
; CHECK-RV64-NEXT: li a3, 131
@@ -15187,10 +14569,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_137
; CHECK-RV64-NEXT: .LBB61_655: # %cond.load525
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 133
; CHECK-RV64-NEXT: li a3, 132
@@ -15204,10 +14584,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_138
; CHECK-RV64-NEXT: .LBB61_656: # %cond.load529
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 134
; CHECK-RV64-NEXT: li a3, 133
@@ -15221,10 +14599,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_139
; CHECK-RV64-NEXT: .LBB61_657: # %cond.load533
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 135
; CHECK-RV64-NEXT: li a3, 134
@@ -15238,10 +14614,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_140
; CHECK-RV64-NEXT: .LBB61_658: # %cond.load537
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 136
; CHECK-RV64-NEXT: li a3, 135
@@ -15255,10 +14629,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_141
; CHECK-RV64-NEXT: .LBB61_659: # %cond.load541
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 137
; CHECK-RV64-NEXT: li a3, 136
@@ -15272,10 +14644,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_142
; CHECK-RV64-NEXT: .LBB61_660: # %cond.load545
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 138
; CHECK-RV64-NEXT: li a3, 137
@@ -15289,10 +14659,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_143
; CHECK-RV64-NEXT: .LBB61_661: # %cond.load549
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 139
; CHECK-RV64-NEXT: li a3, 138
@@ -15306,10 +14674,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_144
; CHECK-RV64-NEXT: .LBB61_662: # %cond.load553
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 140
; CHECK-RV64-NEXT: li a3, 139
@@ -15323,10 +14689,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_145
; CHECK-RV64-NEXT: .LBB61_663: # %cond.load557
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 141
; CHECK-RV64-NEXT: li a3, 140
@@ -15340,10 +14704,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_146
; CHECK-RV64-NEXT: .LBB61_664: # %cond.load561
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 142
; CHECK-RV64-NEXT: li a3, 141
@@ -15357,10 +14719,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_147
; CHECK-RV64-NEXT: .LBB61_665: # %cond.load565
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 143
; CHECK-RV64-NEXT: li a3, 142
@@ -15374,10 +14734,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_148
; CHECK-RV64-NEXT: .LBB61_666: # %cond.load569
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 144
; CHECK-RV64-NEXT: li a3, 143
@@ -15391,10 +14749,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_149
; CHECK-RV64-NEXT: .LBB61_667: # %cond.load573
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 145
; CHECK-RV64-NEXT: li a3, 144
@@ -15408,10 +14764,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_150
; CHECK-RV64-NEXT: .LBB61_668: # %cond.load577
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 146
; CHECK-RV64-NEXT: li a3, 145
@@ -15425,10 +14779,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_151
; CHECK-RV64-NEXT: .LBB61_669: # %cond.load581
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 147
; CHECK-RV64-NEXT: li a3, 146
@@ -15442,10 +14794,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_152
; CHECK-RV64-NEXT: .LBB61_670: # %cond.load585
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 148
; CHECK-RV64-NEXT: li a3, 147
@@ -15459,10 +14809,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_153
; CHECK-RV64-NEXT: .LBB61_671: # %cond.load589
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 149
; CHECK-RV64-NEXT: li a3, 148
@@ -15476,10 +14824,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_154
; CHECK-RV64-NEXT: .LBB61_672: # %cond.load593
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 150
; CHECK-RV64-NEXT: li a3, 149
@@ -15493,10 +14839,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_155
; CHECK-RV64-NEXT: .LBB61_673: # %cond.load597
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 151
; CHECK-RV64-NEXT: li a3, 150
@@ -15510,10 +14854,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_156
; CHECK-RV64-NEXT: .LBB61_674: # %cond.load601
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 152
; CHECK-RV64-NEXT: li a3, 151
@@ -15527,10 +14869,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_157
; CHECK-RV64-NEXT: .LBB61_675: # %cond.load605
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 153
; CHECK-RV64-NEXT: li a3, 152
@@ -15544,10 +14884,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_158
; CHECK-RV64-NEXT: .LBB61_676: # %cond.load609
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 154
; CHECK-RV64-NEXT: li a3, 153
@@ -15561,10 +14899,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_159
; CHECK-RV64-NEXT: .LBB61_677: # %cond.load613
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 155
; CHECK-RV64-NEXT: li a3, 154
@@ -15578,10 +14914,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_160
; CHECK-RV64-NEXT: .LBB61_678: # %cond.load617
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 156
; CHECK-RV64-NEXT: li a3, 155
@@ -15595,10 +14929,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_161
; CHECK-RV64-NEXT: .LBB61_679: # %cond.load621
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 157
; CHECK-RV64-NEXT: li a3, 156
@@ -15612,10 +14944,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_162
; CHECK-RV64-NEXT: .LBB61_680: # %cond.load625
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 158
; CHECK-RV64-NEXT: li a3, 157
@@ -15629,10 +14959,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_163
; CHECK-RV64-NEXT: .LBB61_681: # %cond.load629
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 159
; CHECK-RV64-NEXT: li a3, 158
@@ -15646,10 +14974,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_164
; CHECK-RV64-NEXT: .LBB61_682: # %cond.load633
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 160
; CHECK-RV64-NEXT: li a3, 159
@@ -15663,10 +14989,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_165
; CHECK-RV64-NEXT: .LBB61_683: # %cond.load637
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 161
; CHECK-RV64-NEXT: li a3, 160
@@ -15680,10 +15004,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_166
; CHECK-RV64-NEXT: .LBB61_684: # %cond.load641
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 162
; CHECK-RV64-NEXT: li a3, 161
@@ -15697,10 +15019,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_167
; CHECK-RV64-NEXT: .LBB61_685: # %cond.load645
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 163
; CHECK-RV64-NEXT: li a3, 162
@@ -15714,10 +15034,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_168
; CHECK-RV64-NEXT: .LBB61_686: # %cond.load649
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 164
; CHECK-RV64-NEXT: li a3, 163
@@ -15731,10 +15049,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_169
; CHECK-RV64-NEXT: .LBB61_687: # %cond.load653
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 165
; CHECK-RV64-NEXT: li a3, 164
@@ -15748,10 +15064,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_170
; CHECK-RV64-NEXT: .LBB61_688: # %cond.load657
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 166
; CHECK-RV64-NEXT: li a3, 165
@@ -15765,10 +15079,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_171
; CHECK-RV64-NEXT: .LBB61_689: # %cond.load661
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 167
; CHECK-RV64-NEXT: li a3, 166
@@ -15782,10 +15094,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_172
; CHECK-RV64-NEXT: .LBB61_690: # %cond.load665
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 168
; CHECK-RV64-NEXT: li a3, 167
@@ -15799,10 +15109,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_173
; CHECK-RV64-NEXT: .LBB61_691: # %cond.load669
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 169
; CHECK-RV64-NEXT: li a3, 168
@@ -15816,10 +15124,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_174
; CHECK-RV64-NEXT: .LBB61_692: # %cond.load673
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 170
; CHECK-RV64-NEXT: li a3, 169
@@ -15833,10 +15139,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_175
; CHECK-RV64-NEXT: .LBB61_693: # %cond.load677
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 171
; CHECK-RV64-NEXT: li a3, 170
@@ -15850,10 +15154,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_176
; CHECK-RV64-NEXT: .LBB61_694: # %cond.load681
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 172
; CHECK-RV64-NEXT: li a3, 171
@@ -15867,10 +15169,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_177
; CHECK-RV64-NEXT: .LBB61_695: # %cond.load685
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 173
; CHECK-RV64-NEXT: li a3, 172
@@ -15884,10 +15184,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_178
; CHECK-RV64-NEXT: .LBB61_696: # %cond.load689
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 174
; CHECK-RV64-NEXT: li a3, 173
@@ -15901,10 +15199,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_179
; CHECK-RV64-NEXT: .LBB61_697: # %cond.load693
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 175
; CHECK-RV64-NEXT: li a3, 174
@@ -15918,10 +15214,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_180
; CHECK-RV64-NEXT: .LBB61_698: # %cond.load697
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 176
; CHECK-RV64-NEXT: li a3, 175
@@ -15935,10 +15229,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_181
; CHECK-RV64-NEXT: .LBB61_699: # %cond.load701
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 177
; CHECK-RV64-NEXT: li a3, 176
@@ -15952,10 +15244,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_182
; CHECK-RV64-NEXT: .LBB61_700: # %cond.load705
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 178
; CHECK-RV64-NEXT: li a3, 177
@@ -15969,10 +15259,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_183
; CHECK-RV64-NEXT: .LBB61_701: # %cond.load709
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 179
; CHECK-RV64-NEXT: li a3, 178
@@ -15986,10 +15274,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_184
; CHECK-RV64-NEXT: .LBB61_702: # %cond.load713
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 180
; CHECK-RV64-NEXT: li a3, 179
@@ -16003,10 +15289,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_185
; CHECK-RV64-NEXT: .LBB61_703: # %cond.load717
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 181
; CHECK-RV64-NEXT: li a3, 180
@@ -16020,10 +15304,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_186
; CHECK-RV64-NEXT: .LBB61_704: # %cond.load721
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 182
; CHECK-RV64-NEXT: li a3, 181
@@ -16037,10 +15319,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_187
; CHECK-RV64-NEXT: .LBB61_705: # %cond.load725
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 183
; CHECK-RV64-NEXT: li a3, 182
@@ -16054,10 +15334,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_188
; CHECK-RV64-NEXT: .LBB61_706: # %cond.load729
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 184
; CHECK-RV64-NEXT: li a3, 183
@@ -16071,10 +15349,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_189
; CHECK-RV64-NEXT: .LBB61_707: # %cond.load733
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 185
; CHECK-RV64-NEXT: li a3, 184
@@ -16088,10 +15364,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_190
; CHECK-RV64-NEXT: .LBB61_708: # %cond.load737
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 186
; CHECK-RV64-NEXT: li a3, 185
@@ -16105,10 +15379,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_191
; CHECK-RV64-NEXT: .LBB61_709: # %cond.load741
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 187
; CHECK-RV64-NEXT: li a3, 186
@@ -16122,10 +15394,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_192
; CHECK-RV64-NEXT: .LBB61_710: # %cond.load745
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 188
; CHECK-RV64-NEXT: li a3, 187
@@ -16139,10 +15409,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_193
; CHECK-RV64-NEXT: .LBB61_711: # %cond.load749
; CHECK-RV64-NEXT: lbu a1, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a1
; CHECK-RV64-NEXT: li a1, 189
; CHECK-RV64-NEXT: li a3, 188
@@ -16172,10 +15440,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_199
; CHECK-RV64-NEXT: .LBB61_713: # %cond.load765
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 193
; CHECK-RV64-NEXT: li a3, 192
@@ -16189,10 +15455,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_200
; CHECK-RV64-NEXT: .LBB61_714: # %cond.load769
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 194
; CHECK-RV64-NEXT: li a3, 193
@@ -16206,10 +15470,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_201
; CHECK-RV64-NEXT: .LBB61_715: # %cond.load773
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 195
; CHECK-RV64-NEXT: li a3, 194
@@ -16223,10 +15485,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_202
; CHECK-RV64-NEXT: .LBB61_716: # %cond.load777
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 196
; CHECK-RV64-NEXT: li a3, 195
@@ -16240,10 +15500,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_203
; CHECK-RV64-NEXT: .LBB61_717: # %cond.load781
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 197
; CHECK-RV64-NEXT: li a3, 196
@@ -16257,10 +15515,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_204
; CHECK-RV64-NEXT: .LBB61_718: # %cond.load785
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 198
; CHECK-RV64-NEXT: li a3, 197
@@ -16274,10 +15530,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_205
; CHECK-RV64-NEXT: .LBB61_719: # %cond.load789
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 199
; CHECK-RV64-NEXT: li a3, 198
@@ -16291,10 +15545,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_206
; CHECK-RV64-NEXT: .LBB61_720: # %cond.load793
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 200
; CHECK-RV64-NEXT: li a3, 199
@@ -16308,10 +15560,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_207
; CHECK-RV64-NEXT: .LBB61_721: # %cond.load797
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 201
; CHECK-RV64-NEXT: li a3, 200
@@ -16325,10 +15575,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_208
; CHECK-RV64-NEXT: .LBB61_722: # %cond.load801
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 202
; CHECK-RV64-NEXT: li a3, 201
@@ -16342,10 +15590,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_209
; CHECK-RV64-NEXT: .LBB61_723: # %cond.load805
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 203
; CHECK-RV64-NEXT: li a3, 202
@@ -16359,10 +15605,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_210
; CHECK-RV64-NEXT: .LBB61_724: # %cond.load809
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 204
; CHECK-RV64-NEXT: li a3, 203
@@ -16376,10 +15620,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_211
; CHECK-RV64-NEXT: .LBB61_725: # %cond.load813
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 205
; CHECK-RV64-NEXT: li a3, 204
@@ -16393,10 +15635,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_212
; CHECK-RV64-NEXT: .LBB61_726: # %cond.load817
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 206
; CHECK-RV64-NEXT: li a3, 205
@@ -16410,10 +15650,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_213
; CHECK-RV64-NEXT: .LBB61_727: # %cond.load821
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 207
; CHECK-RV64-NEXT: li a3, 206
@@ -16427,10 +15665,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_214
; CHECK-RV64-NEXT: .LBB61_728: # %cond.load825
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 208
; CHECK-RV64-NEXT: li a3, 207
@@ -16444,10 +15680,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_215
; CHECK-RV64-NEXT: .LBB61_729: # %cond.load829
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 209
; CHECK-RV64-NEXT: li a3, 208
@@ -16461,10 +15695,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_216
; CHECK-RV64-NEXT: .LBB61_730: # %cond.load833
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 210
; CHECK-RV64-NEXT: li a3, 209
@@ -16478,10 +15710,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_217
; CHECK-RV64-NEXT: .LBB61_731: # %cond.load837
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 211
; CHECK-RV64-NEXT: li a3, 210
@@ -16495,10 +15725,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_218
; CHECK-RV64-NEXT: .LBB61_732: # %cond.load841
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 212
; CHECK-RV64-NEXT: li a3, 211
@@ -16512,10 +15740,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_219
; CHECK-RV64-NEXT: .LBB61_733: # %cond.load845
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 213
; CHECK-RV64-NEXT: li a3, 212
@@ -16529,10 +15755,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_220
; CHECK-RV64-NEXT: .LBB61_734: # %cond.load849
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 214
; CHECK-RV64-NEXT: li a3, 213
@@ -16546,10 +15770,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_221
; CHECK-RV64-NEXT: .LBB61_735: # %cond.load853
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 215
; CHECK-RV64-NEXT: li a3, 214
@@ -16563,10 +15785,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_222
; CHECK-RV64-NEXT: .LBB61_736: # %cond.load857
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 216
; CHECK-RV64-NEXT: li a3, 215
@@ -16580,10 +15800,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_223
; CHECK-RV64-NEXT: .LBB61_737: # %cond.load861
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 217
; CHECK-RV64-NEXT: li a3, 216
@@ -16597,10 +15815,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_224
; CHECK-RV64-NEXT: .LBB61_738: # %cond.load865
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 218
; CHECK-RV64-NEXT: li a3, 217
@@ -16614,10 +15830,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_225
; CHECK-RV64-NEXT: .LBB61_739: # %cond.load869
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 219
; CHECK-RV64-NEXT: li a3, 218
@@ -16631,10 +15845,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_226
; CHECK-RV64-NEXT: .LBB61_740: # %cond.load873
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 220
; CHECK-RV64-NEXT: li a3, 219
@@ -16648,10 +15860,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_227
; CHECK-RV64-NEXT: .LBB61_741: # %cond.load877
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 221
; CHECK-RV64-NEXT: li a3, 220
@@ -16665,10 +15875,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_228
; CHECK-RV64-NEXT: .LBB61_742: # %cond.load881
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 222
; CHECK-RV64-NEXT: li a3, 221
@@ -16682,10 +15890,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_229
; CHECK-RV64-NEXT: .LBB61_743: # %cond.load885
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 223
; CHECK-RV64-NEXT: li a3, 222
@@ -16699,10 +15905,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_230
; CHECK-RV64-NEXT: .LBB61_744: # %cond.load889
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 224
; CHECK-RV64-NEXT: li a3, 223
@@ -16716,10 +15920,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_231
; CHECK-RV64-NEXT: .LBB61_745: # %cond.load893
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 225
; CHECK-RV64-NEXT: li a3, 224
@@ -16733,10 +15935,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_232
; CHECK-RV64-NEXT: .LBB61_746: # %cond.load897
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 226
; CHECK-RV64-NEXT: li a3, 225
@@ -16750,10 +15950,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_233
; CHECK-RV64-NEXT: .LBB61_747: # %cond.load901
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 227
; CHECK-RV64-NEXT: li a3, 226
@@ -16767,10 +15965,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_234
; CHECK-RV64-NEXT: .LBB61_748: # %cond.load905
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 228
; CHECK-RV64-NEXT: li a3, 227
@@ -16784,10 +15980,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_235
; CHECK-RV64-NEXT: .LBB61_749: # %cond.load909
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 229
; CHECK-RV64-NEXT: li a3, 228
@@ -16801,10 +15995,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_236
; CHECK-RV64-NEXT: .LBB61_750: # %cond.load913
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 230
; CHECK-RV64-NEXT: li a3, 229
@@ -16818,10 +16010,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_237
; CHECK-RV64-NEXT: .LBB61_751: # %cond.load917
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 231
; CHECK-RV64-NEXT: li a3, 230
@@ -16835,10 +16025,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_238
; CHECK-RV64-NEXT: .LBB61_752: # %cond.load921
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 232
; CHECK-RV64-NEXT: li a3, 231
@@ -16852,10 +16040,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_239
; CHECK-RV64-NEXT: .LBB61_753: # %cond.load925
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 233
; CHECK-RV64-NEXT: li a3, 232
@@ -16869,10 +16055,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_240
; CHECK-RV64-NEXT: .LBB61_754: # %cond.load929
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 234
; CHECK-RV64-NEXT: li a3, 233
@@ -16886,10 +16070,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_241
; CHECK-RV64-NEXT: .LBB61_755: # %cond.load933
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 235
; CHECK-RV64-NEXT: li a3, 234
@@ -16903,10 +16085,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_242
; CHECK-RV64-NEXT: .LBB61_756: # %cond.load937
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 236
; CHECK-RV64-NEXT: li a3, 235
@@ -16920,10 +16100,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_243
; CHECK-RV64-NEXT: .LBB61_757: # %cond.load941
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 237
; CHECK-RV64-NEXT: li a3, 236
@@ -16937,10 +16115,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_244
; CHECK-RV64-NEXT: .LBB61_758: # %cond.load945
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 238
; CHECK-RV64-NEXT: li a3, 237
@@ -16954,10 +16130,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_245
; CHECK-RV64-NEXT: .LBB61_759: # %cond.load949
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 239
; CHECK-RV64-NEXT: li a3, 238
@@ -16971,10 +16145,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_246
; CHECK-RV64-NEXT: .LBB61_760: # %cond.load953
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 240
; CHECK-RV64-NEXT: li a3, 239
@@ -16988,10 +16160,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_247
; CHECK-RV64-NEXT: .LBB61_761: # %cond.load957
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 241
; CHECK-RV64-NEXT: li a3, 240
@@ -17005,10 +16175,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_248
; CHECK-RV64-NEXT: .LBB61_762: # %cond.load961
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 242
; CHECK-RV64-NEXT: li a3, 241
@@ -17022,10 +16190,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_249
; CHECK-RV64-NEXT: .LBB61_763: # %cond.load965
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 243
; CHECK-RV64-NEXT: li a3, 242
@@ -17039,10 +16205,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_250
; CHECK-RV64-NEXT: .LBB61_764: # %cond.load969
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 244
; CHECK-RV64-NEXT: li a3, 243
@@ -17056,10 +16220,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_251
; CHECK-RV64-NEXT: .LBB61_765: # %cond.load973
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 245
; CHECK-RV64-NEXT: li a3, 244
@@ -17073,10 +16235,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_252
; CHECK-RV64-NEXT: .LBB61_766: # %cond.load977
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 246
; CHECK-RV64-NEXT: li a3, 245
@@ -17090,10 +16250,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_253
; CHECK-RV64-NEXT: .LBB61_767: # %cond.load981
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 247
; CHECK-RV64-NEXT: li a3, 246
@@ -17107,10 +16265,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_254
; CHECK-RV64-NEXT: .LBB61_768: # %cond.load985
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 248
; CHECK-RV64-NEXT: li a3, 247
@@ -17124,10 +16280,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_255
; CHECK-RV64-NEXT: .LBB61_769: # %cond.load989
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 249
; CHECK-RV64-NEXT: li a3, 248
@@ -17141,10 +16295,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_256
; CHECK-RV64-NEXT: .LBB61_770: # %cond.load993
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 250
; CHECK-RV64-NEXT: li a3, 249
@@ -17158,10 +16310,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_257
; CHECK-RV64-NEXT: .LBB61_771: # %cond.load997
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 251
; CHECK-RV64-NEXT: li a3, 250
@@ -17175,10 +16325,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_258
; CHECK-RV64-NEXT: .LBB61_772: # %cond.load1001
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 252
; CHECK-RV64-NEXT: li a3, 251
@@ -17192,10 +16340,8 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
; CHECK-RV64-NEXT: j .LBB61_259
; CHECK-RV64-NEXT: .LBB61_773: # %cond.load1005
; CHECK-RV64-NEXT: lbu a2, 0(a0)
-; CHECK-RV64-NEXT: li a3, 512
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv8r.v v16, v8
-; CHECK-RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv.s.x v12, a2
; CHECK-RV64-NEXT: li a2, 253
; CHECK-RV64-NEXT: li a3, 252
diff --git a/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
index 5a974f88ed8f38..83637e4a71d45b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extract-subvector.ll
@@ -13,7 +13,7 @@ define <vscale x 4 x i32> @extract_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec) {
define <vscale x 4 x i32> @extract_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv4i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, i64 4)
@@ -31,7 +31,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 2)
@@ -41,7 +41,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 4)
@@ -51,7 +51,7 @@ define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec) {
define <vscale x 2 x i32> @extract_nxv8i32_nxv2i32_6(<vscale x 8 x i32> %vec) {
; CHECK-LABEL: extract_nxv8i32_nxv2i32_6:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v11
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, i64 6)
@@ -69,7 +69,7 @@ define <vscale x 8 x i32> @extract_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec)
define <vscale x 8 x i32> @extract_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv8i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 8 x i32> @llvm.vector.extract.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -87,7 +87,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 4)
@@ -97,7 +97,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -107,7 +107,7 @@ define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec)
define <vscale x 4 x i32> @extract_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv4i32_12:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v14
; CHECK-NEXT: ret
%c = call <vscale x 4 x i32> @llvm.vector.extract.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, i64 12)
@@ -125,7 +125,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 2)
@@ -135,7 +135,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v10
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 4)
@@ -145,7 +145,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_6:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v11
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 6)
@@ -155,7 +155,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v12
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 8)
@@ -165,7 +165,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_10:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v13
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 10)
@@ -175,7 +175,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_12:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v14
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 12)
@@ -185,7 +185,7 @@ define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec)
define <vscale x 2 x i32> @extract_nxv16i32_nxv2i32_14(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv2i32_14:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v15
; CHECK-NEXT: ret
%c = call <vscale x 2 x i32> @llvm.vector.extract.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, i64 14)
@@ -239,7 +239,7 @@ define <vscale x 1 x i32> @extract_nxv16i32_nxv1i32_15(<vscale x 16 x i32> %vec)
define <vscale x 1 x i32> @extract_nxv16i32_nxv1i32_2(<vscale x 16 x i32> %vec) {
; CHECK-LABEL: extract_nxv16i32_nxv1i32_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 1 x i32> @llvm.vector.extract.nxv1i32.nxv16i32(<vscale x 16 x i32> %vec, i64 2)
@@ -303,7 +303,7 @@ define <vscale x 2 x i8> @extract_nxv32i8_nxv2i8_6(<vscale x 32 x i8> %vec) {
define <vscale x 2 x i8> @extract_nxv32i8_nxv2i8_8(<vscale x 32 x i8> %vec) {
; CHECK-LABEL: extract_nxv32i8_nxv2i8_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x i8> @llvm.vector.extract.nxv2i8.nxv32i8(<vscale x 32 x i8> %vec, i64 8)
@@ -374,7 +374,7 @@ define <vscale x 2 x half> @extract_nxv2f16_nxv16f16_2(<vscale x 16 x half> %vec
define <vscale x 2 x half> @extract_nxv2f16_nxv16f16_4(<vscale x 16 x half> %vec) {
; CHECK-LABEL: extract_nxv2f16_nxv16f16_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x half> @llvm.vector.extract.nxv2f16.nxv16f16(<vscale x 16 x half> %vec, i64 4)
@@ -522,7 +522,7 @@ define <vscale x 2 x bfloat> @extract_nxv2bf16_nxv16bf16_2(<vscale x 16 x bfloat
define <vscale x 2 x bfloat> @extract_nxv2bf16_nxv16bf16_4(<vscale x 16 x bfloat> %vec) {
; CHECK-LABEL: extract_nxv2bf16_nxv16bf16_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%c = call <vscale x 2 x bfloat> @llvm.vector.extract.nxv2bf16.nxv16bf16(<vscale x 16 x bfloat> %vec, i64 4)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
index 226cce4dbaf09f..d4aa05156b66cd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
@@ -1659,7 +1659,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
@@ -2056,7 +2056,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
index e2ce999eb0f77f..60a9948198c8f5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv-fastcc.ll
@@ -180,7 +180,7 @@ define fastcc <32 x i32> @ret_v32i32_call_v32i32_v32i32_i32(<32 x i32> %x, <32 x
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; CHECK-NEXT: .cfi_offset ra, -8
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a1, 2
; CHECK-NEXT: vmv8r.v v8, v16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
index d83540281b415b..f42b4a3a26aad0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-calling-conv.ll
@@ -180,7 +180,7 @@ define <32 x i32> @ret_v32i32_call_v32i32_v32i32_i32(<32 x i32> %x, <32 x i32> %
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; CHECK-NEXT: .cfi_offset ra, -8
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v24, v8
; CHECK-NEXT: li a1, 2
; CHECK-NEXT: vmv8r.v v8, v16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
index d40c672cdb6a1c..ca80a314c680a9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
@@ -261,7 +261,7 @@ declare <16 x half> @llvm.vp.ceil.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_ceil_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -432,7 +432,7 @@ declare <8 x float> @llvm.vp.ceil.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_ceil_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -477,7 +477,7 @@ declare <16 x float> @llvm.vp.ceil.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_ceil_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -564,7 +564,7 @@ declare <4 x double> @llvm.vp.ceil.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_ceil_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -609,7 +609,7 @@ declare <8 x double> @llvm.vp.ceil.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_ceil_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -654,7 +654,7 @@ declare <15 x double> @llvm.vp.ceil.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_ceil_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -699,7 +699,7 @@ declare <16 x double> @llvm.vp.ceil.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_ceil_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
index bbf97891fd9b10..a5a10618424274 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
@@ -1796,7 +1796,7 @@ define <32 x i64> @vp_ctpop_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv8r.v v24, v16
; RV32-NEXT: lui a1, 349525
; RV32-NEXT: lui a2, 209715
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
index 9bce8d4960e89f..9c8d9879b2f07f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
@@ -261,7 +261,7 @@ declare <16 x half> @llvm.vp.floor.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_floor_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -432,7 +432,7 @@ declare <8 x float> @llvm.vp.floor.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_floor_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -477,7 +477,7 @@ declare <16 x float> @llvm.vp.floor.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_floor_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -564,7 +564,7 @@ declare <4 x double> @llvm.vp.floor.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_floor_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -609,7 +609,7 @@ declare <8 x double> @llvm.vp.floor.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_floor_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -654,7 +654,7 @@ declare <15 x double> @llvm.vp.floor.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_floor_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -699,7 +699,7 @@ declare <16 x double> @llvm.vp.floor.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_floor_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
index ef6a429d391a62..adeaf2390f1ef3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
@@ -13,7 +13,7 @@ declare <2 x half> @llvm.vp.maximum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmax_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -84,7 +84,7 @@ declare <4 x half> @llvm.vp.maximum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmax_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -155,7 +155,7 @@ declare <8 x half> @llvm.vp.maximum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmax_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -228,7 +228,7 @@ declare <16 x half> @llvm.vp.maximum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmax_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -303,7 +303,7 @@ declare <2 x float> @llvm.vp.maximum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmax_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -337,7 +337,7 @@ declare <4 x float> @llvm.vp.maximum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmax_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -371,7 +371,7 @@ declare <8 x float> @llvm.vp.maximum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmax_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -407,7 +407,7 @@ declare <16 x float> @llvm.vp.maximum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmax_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -443,7 +443,7 @@ declare <2 x double> @llvm.vp.maximum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmax_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -477,7 +477,7 @@ declare <4 x double> @llvm.vp.maximum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmax_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -513,7 +513,7 @@ declare <8 x double> @llvm.vp.maximum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmax_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -555,7 +555,7 @@ define <16 x double> @vfmax_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
index ca6b86f8325c4a..a3795581decd2f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
@@ -13,7 +13,7 @@ declare <2 x half> @llvm.vp.minimum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmin_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -84,7 +84,7 @@ declare <4 x half> @llvm.vp.minimum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmin_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -155,7 +155,7 @@ declare <8 x half> @llvm.vp.minimum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmin_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -228,7 +228,7 @@ declare <16 x half> @llvm.vp.minimum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmin_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -303,7 +303,7 @@ declare <2 x float> @llvm.vp.minimum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmin_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -337,7 +337,7 @@ declare <4 x float> @llvm.vp.minimum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmin_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -371,7 +371,7 @@ declare <8 x float> @llvm.vp.minimum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmin_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -407,7 +407,7 @@ declare <16 x float> @llvm.vp.minimum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmin_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -443,7 +443,7 @@ declare <2 x double> @llvm.vp.minimum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmin_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -477,7 +477,7 @@ declare <4 x double> @llvm.vp.minimum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmin_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -513,7 +513,7 @@ declare <8 x double> @llvm.vp.minimum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmin_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -555,7 +555,7 @@ define <16 x double> @vfmin_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
index 9023d9732ef182..62e7e3b1099020 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
@@ -133,7 +133,7 @@ define <vscale x 2 x i32> @insert_nxv8i32_v4i32_0(<vscale x 2 x i32> %vec, <4 x
;
; VLS-LABEL: insert_nxv8i32_v4i32_0:
; VLS: # %bb.0:
-; VLS-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; VLS-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; VLS-NEXT: vmv1r.v v8, v9
; VLS-NEXT: ret
%v = call <vscale x 2 x i32> @llvm.vector.insert.nxv2i32.v4i32(<vscale x 2 x i32> %vec, <4 x i32> %subvec, i64 0)
@@ -144,7 +144,7 @@ define <vscale x 2 x i32> @insert_nxv8i32_v4i32_0(<vscale x 2 x i32> %vec, <4 x
define <4 x i32> @insert_v4i32_v4i32_0(<4 x i32> %vec, <4 x i32> %subvec) {
; CHECK-LABEL: insert_v4i32_v4i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%v = call <4 x i32> @llvm.vector.insert.v4i32.v4i32(<4 x i32> %vec, <4 x i32> %subvec, i64 0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
index 722a0aff8d9e44..141d54cf585f28 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
@@ -556,13 +556,13 @@ define <4 x i8> @mgather_truemask_v4i8(<4 x ptr> %ptrs, <4 x i8> %passthru) {
define <4 x i8> @mgather_falsemask_v4i8(<4 x ptr> %ptrs, <4 x i8> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i8:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i8:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -779,7 +779,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
; RV64ZVE32F-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB12_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB12_14: # %cond.load4
@@ -1252,13 +1252,13 @@ define <4 x i16> @mgather_truemask_v4i16(<4 x ptr> %ptrs, <4 x i16> %passthru) {
define <4 x i16> @mgather_falsemask_v4i16(<4 x ptr> %ptrs, <4 x i16> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i16:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -1486,7 +1486,7 @@ define <8 x i16> @mgather_baseidx_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB23_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB23_14: # %cond.load4
@@ -1635,7 +1635,7 @@ define <8 x i16> @mgather_baseidx_sext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB24_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB24_14: # %cond.load4
@@ -1788,7 +1788,7 @@ define <8 x i16> @mgather_baseidx_zext_v8i8_v8i16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB25_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB25_14: # %cond.load4
@@ -1935,7 +1935,7 @@ define <8 x i16> @mgather_baseidx_v8i16(ptr %base, <8 x i16> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB26_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB26_14: # %cond.load4
@@ -2300,13 +2300,13 @@ define <4 x i32> @mgather_truemask_v4i32(<4 x ptr> %ptrs, <4 x i32> %passthru) {
define <4 x i32> @mgather_falsemask_v4i32(<4 x ptr> %ptrs, <4 x i32> %passthru) {
; RV32-LABEL: mgather_falsemask_v4i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i32:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -2533,7 +2533,7 @@ define <8 x i32> @mgather_baseidx_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB35_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB35_14: # %cond.load4
@@ -2681,7 +2681,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB36_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB36_14: # %cond.load4
@@ -2836,7 +2836,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i8_v8i32(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB37_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB37_14: # %cond.load4
@@ -2989,7 +2989,7 @@ define <8 x i32> @mgather_baseidx_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <8 x i
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB38_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB38_14: # %cond.load4
@@ -3138,7 +3138,7 @@ define <8 x i32> @mgather_baseidx_sext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB39_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB39_14: # %cond.load4
@@ -3294,7 +3294,7 @@ define <8 x i32> @mgather_baseidx_zext_v8i16_v8i32(ptr %base, <8 x i16> %idxs, <
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB40_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB40_14: # %cond.load4
@@ -3440,7 +3440,7 @@ define <8 x i32> @mgather_baseidx_v8i32(ptr %base, <8 x i32> %idxs, <8 x i1> %m,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB41_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB41_14: # %cond.load4
@@ -3792,13 +3792,13 @@ define <4 x i64> @mgather_truemask_v4i64(<4 x ptr> %ptrs, <4 x i64> %passthru) {
define <4 x i64> @mgather_falsemask_v4i64(<4 x ptr> %ptrs, <4 x i64> %passthru) {
; RV32V-LABEL: mgather_falsemask_v4i64:
; RV32V: # %bb.0:
-; RV32V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32V-NEXT: vmv2r.v v8, v10
; RV32V-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4i64:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv2r.v v8, v10
; RV64V-NEXT: ret
;
@@ -7085,13 +7085,13 @@ define <4 x bfloat> @mgather_truemask_v4bf16(<4 x ptr> %ptrs, <4 x bfloat> %pass
define <4 x bfloat> @mgather_falsemask_v4bf16(<4 x ptr> %ptrs, <4 x bfloat> %passthru) {
; RV32-LABEL: mgather_falsemask_v4bf16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4bf16:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -7319,7 +7319,7 @@ define <8 x bfloat> @mgather_baseidx_v8i8_v8bf16(ptr %base, <8 x i8> %idxs, <8 x
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB64_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB64_14: # %cond.load4
@@ -7468,7 +7468,7 @@ define <8 x bfloat> @mgather_baseidx_sext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB65_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB65_14: # %cond.load4
@@ -7621,7 +7621,7 @@ define <8 x bfloat> @mgather_baseidx_zext_v8i8_v8bf16(ptr %base, <8 x i8> %idxs,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB66_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB66_14: # %cond.load4
@@ -7768,7 +7768,7 @@ define <8 x bfloat> @mgather_baseidx_v8bf16(ptr %base, <8 x i16> %idxs, <8 x i1>
; RV64ZVE32F-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-NEXT: .LBB67_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB67_14: # %cond.load4
@@ -8097,13 +8097,13 @@ define <4 x half> @mgather_truemask_v4f16(<4 x ptr> %ptrs, <4 x half> %passthru)
define <4 x half> @mgather_falsemask_v4f16(<4 x ptr> %ptrs, <4 x half> %passthru) {
; RV32-LABEL: mgather_falsemask_v4f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f16:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -8424,7 +8424,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFH-NEXT: .LBB74_13: # %else20
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
; RV64ZVE32F-ZVFH-NEXT: .LBB74_14: # %cond.load4
@@ -8548,7 +8548,7 @@ define <8 x half> @mgather_baseidx_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8 x i1
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_13: # %else20
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB74_14: # %cond.load4
@@ -8697,7 +8697,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFH-NEXT: .LBB75_13: # %else20
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
; RV64ZVE32F-ZVFH-NEXT: .LBB75_14: # %cond.load4
@@ -8821,7 +8821,7 @@ define <8 x half> @mgather_baseidx_sext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_13: # %else20
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB75_14: # %cond.load4
@@ -8974,7 +8974,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFH-NEXT: .LBB76_13: # %else20
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
; RV64ZVE32F-ZVFH-NEXT: .LBB76_14: # %cond.load4
@@ -9106,7 +9106,7 @@ define <8 x half> @mgather_baseidx_zext_v8i8_v8f16(ptr %base, <8 x i8> %idxs, <8
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_13: # %else20
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB76_14: # %cond.load4
@@ -9253,7 +9253,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFH-NEXT: .LBB77_13: # %else20
-; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFH-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFH-NEXT: ret
; RV64ZVE32F-ZVFH-NEXT: .LBB77_14: # %cond.load4
@@ -9369,7 +9369,7 @@ define <8 x half> @mgather_baseidx_v8f16(ptr %base, <8 x i16> %idxs, <8 x i1> %m
; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vslideup.vi v9, v8, 7
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_13: # %else20
-; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-ZVFHMIN-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-ZVFHMIN-NEXT: ret
; RV64ZVE32F-ZVFHMIN-NEXT: .LBB77_14: # %cond.load4
@@ -9606,13 +9606,13 @@ define <4 x float> @mgather_truemask_v4f32(<4 x ptr> %ptrs, <4 x float> %passthr
define <4 x float> @mgather_falsemask_v4f32(<4 x ptr> %ptrs, <4 x float> %passthru) {
; RV32-LABEL: mgather_falsemask_v4f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v9
; RV32-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f32:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv1r.v v8, v10
; RV64V-NEXT: ret
;
@@ -9839,7 +9839,7 @@ define <8 x float> @mgather_baseidx_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <8 x i
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB84_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB84_14: # %cond.load4
@@ -9987,7 +9987,7 @@ define <8 x float> @mgather_baseidx_sext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB85_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB85_14: # %cond.load4
@@ -10142,7 +10142,7 @@ define <8 x float> @mgather_baseidx_zext_v8i8_v8f32(ptr %base, <8 x i8> %idxs, <
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB86_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB86_14: # %cond.load4
@@ -10295,7 +10295,7 @@ define <8 x float> @mgather_baseidx_v8i16_v8f32(ptr %base, <8 x i16> %idxs, <8 x
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB87_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB87_14: # %cond.load4
@@ -10444,7 +10444,7 @@ define <8 x float> @mgather_baseidx_sext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB88_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB88_14: # %cond.load4
@@ -10600,7 +10600,7 @@ define <8 x float> @mgather_baseidx_zext_v8i16_v8f32(ptr %base, <8 x i16> %idxs,
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB89_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB89_14: # %cond.load4
@@ -10746,7 +10746,7 @@ define <8 x float> @mgather_baseidx_v8f32(ptr %base, <8 x i32> %idxs, <8 x i1> %
; RV64ZVE32F-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 7
; RV64ZVE32F-NEXT: .LBB90_13: # %else20
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB90_14: # %cond.load4
@@ -11056,13 +11056,13 @@ define <4 x double> @mgather_truemask_v4f64(<4 x ptr> %ptrs, <4 x double> %passt
define <4 x double> @mgather_falsemask_v4f64(<4 x ptr> %ptrs, <4 x double> %passthru) {
; RV32V-LABEL: mgather_falsemask_v4f64:
; RV32V: # %bb.0:
-; RV32V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32V-NEXT: vmv2r.v v8, v10
; RV32V-NEXT: ret
;
; RV64V-LABEL: mgather_falsemask_v4f64:
; RV64V: # %bb.0:
-; RV64V-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64V-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64V-NEXT: vmv2r.v v8, v10
; RV64V-NEXT: ret
;
@@ -13623,7 +13623,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
; RV64ZVE32F-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v9, v8, 15
; RV64ZVE32F-NEXT: .LBB107_24: # %else44
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv1r.v v8, v9
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB107_25: # %cond.load4
@@ -14010,7 +14010,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
; RV64ZVE32F-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; RV64ZVE32F-NEXT: vslideup.vi v10, v8, 31
; RV64ZVE32F-NEXT: .LBB108_48: # %else92
-; RV64ZVE32F-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64ZVE32F-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64ZVE32F-NEXT: vmv2r.v v8, v10
; RV64ZVE32F-NEXT: ret
; RV64ZVE32F-NEXT: .LBB108_49: # %cond.load4
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
index e5f3e22361f635..69903d77084bf4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
@@ -318,7 +318,7 @@ define <128 x i16> @masked_load_v128i16(ptr %a, <128 x i1> %mask) {
define <256 x i8> @masked_load_v256i8(ptr %a, <256 x i1> %mask) {
; CHECK-LABEL: masked_load_v256i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v8
; CHECK-NEXT: li a1, 128
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
index e614307f35dcc0..cecb47d16ce4b4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
@@ -135,7 +135,7 @@ declare <16 x half> @llvm.vp.nearbyint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_nearbyint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -264,7 +264,7 @@ declare <8 x float> @llvm.vp.nearbyint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_nearbyint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -309,7 +309,7 @@ declare <16 x float> @llvm.vp.nearbyint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_nearbyint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -396,7 +396,7 @@ declare <4 x double> @llvm.vp.nearbyint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_nearbyint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -441,7 +441,7 @@ declare <8 x double> @llvm.vp.nearbyint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_nearbyint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -486,7 +486,7 @@ declare <15 x double> @llvm.vp.nearbyint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_nearbyint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -531,7 +531,7 @@ declare <16 x double> @llvm.vp.nearbyint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_nearbyint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
index ddb16fe11719a6..05b4150e8c328f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
@@ -23,7 +23,7 @@ declare i1 @llvm.vp.reduce.or.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_or_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -40,7 +40,7 @@ declare i1 @llvm.vp.reduce.xor.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_xor_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -73,7 +73,7 @@ declare i1 @llvm.vp.reduce.or.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_or_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -90,7 +90,7 @@ declare i1 @llvm.vp.reduce.xor.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_xor_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -123,7 +123,7 @@ declare i1 @llvm.vp.reduce.or.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_or_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -140,7 +140,7 @@ declare i1 @llvm.vp.reduce.xor.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_xor_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -173,7 +173,7 @@ declare i1 @llvm.vp.reduce.or.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_or_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -190,7 +190,7 @@ declare i1 @llvm.vp.reduce.xor.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_xor_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -239,7 +239,7 @@ declare i1 @llvm.vp.reduce.and.v256i1(i1, <256 x i1>, <256 x i1>, i32)
define zeroext i1 @vpreduce_and_v256i1(i1 zeroext %s, <256 x i1> %v, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_and_v256i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v9
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: li a3, 128
@@ -274,7 +274,7 @@ declare i1 @llvm.vp.reduce.or.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_or_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -291,7 +291,7 @@ declare i1 @llvm.vp.reduce.xor.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_xor_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -308,7 +308,7 @@ declare i1 @llvm.vp.reduce.add.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_add_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -325,7 +325,7 @@ declare i1 @llvm.vp.reduce.add.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_add_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -342,7 +342,7 @@ declare i1 @llvm.vp.reduce.add.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_add_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -359,7 +359,7 @@ declare i1 @llvm.vp.reduce.add.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_add_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -376,7 +376,7 @@ declare i1 @llvm.vp.reduce.add.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_add_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -505,7 +505,7 @@ declare i1 @llvm.vp.reduce.smin.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_smin_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -522,7 +522,7 @@ declare i1 @llvm.vp.reduce.smin.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_smin_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -539,7 +539,7 @@ declare i1 @llvm.vp.reduce.smin.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_smin_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -556,7 +556,7 @@ declare i1 @llvm.vp.reduce.smin.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_smin_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -573,7 +573,7 @@ declare i1 @llvm.vp.reduce.smin.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_smin_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -590,7 +590,7 @@ declare i1 @llvm.vp.reduce.smin.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_smin_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -607,7 +607,7 @@ declare i1 @llvm.vp.reduce.smin.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_smin_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -624,7 +624,7 @@ declare i1 @llvm.vp.reduce.umax.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_umax_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -641,7 +641,7 @@ declare i1 @llvm.vp.reduce.umax.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_umax_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -658,7 +658,7 @@ declare i1 @llvm.vp.reduce.umax.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_umax_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -675,7 +675,7 @@ declare i1 @llvm.vp.reduce.umax.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_umax_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -692,7 +692,7 @@ declare i1 @llvm.vp.reduce.umax.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_umax_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -709,7 +709,7 @@ declare i1 @llvm.vp.reduce.umax.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_umax_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -726,7 +726,7 @@ declare i1 @llvm.vp.reduce.umax.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_umax_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
index b45d18d01f67a2..f92bd4421e769e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
@@ -123,7 +123,7 @@ declare <16 x half> @llvm.vp.rint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_rint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -240,7 +240,7 @@ declare <8 x float> @llvm.vp.rint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_rint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -281,7 +281,7 @@ declare <16 x float> @llvm.vp.rint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_rint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -360,7 +360,7 @@ declare <4 x double> @llvm.vp.rint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_rint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -401,7 +401,7 @@ declare <8 x double> @llvm.vp.rint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_rint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -442,7 +442,7 @@ declare <15 x double> @llvm.vp.rint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_rint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -483,7 +483,7 @@ declare <16 x double> @llvm.vp.rint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_rint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
index 0c23a71b9af3f4..e94734d66f1d56 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
@@ -261,7 +261,7 @@ declare <16 x half> @llvm.vp.round.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_round_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -432,7 +432,7 @@ declare <8 x float> @llvm.vp.round.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_round_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -477,7 +477,7 @@ declare <16 x float> @llvm.vp.round.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_round_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -564,7 +564,7 @@ declare <4 x double> @llvm.vp.round.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_round_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -609,7 +609,7 @@ declare <8 x double> @llvm.vp.round.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_round_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -654,7 +654,7 @@ declare <15 x double> @llvm.vp.round.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_round_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -699,7 +699,7 @@ declare <16 x double> @llvm.vp.round.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_round_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
index eed410343999dd..8cbf21c07e7541 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
@@ -261,7 +261,7 @@ declare <16 x half> @llvm.vp.roundeven.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundeven_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -432,7 +432,7 @@ declare <8 x float> @llvm.vp.roundeven.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundeven_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -477,7 +477,7 @@ declare <16 x float> @llvm.vp.roundeven.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundeven_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -564,7 +564,7 @@ declare <4 x double> @llvm.vp.roundeven.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundeven_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -609,7 +609,7 @@ declare <8 x double> @llvm.vp.roundeven.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundeven_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -654,7 +654,7 @@ declare <15 x double> @llvm.vp.roundeven.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundeven_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -699,7 +699,7 @@ declare <16 x double> @llvm.vp.roundeven.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundeven_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
index fb3015e23a9495..98071d83e0e522 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
@@ -261,7 +261,7 @@ declare <16 x half> @llvm.vp.roundtozero.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundtozero_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
@@ -432,7 +432,7 @@ declare <8 x float> @llvm.vp.roundtozero.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundtozero_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -477,7 +477,7 @@ declare <16 x float> @llvm.vp.roundtozero.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundtozero_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -564,7 +564,7 @@ declare <4 x double> @llvm.vp.roundtozero.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundtozero_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
@@ -609,7 +609,7 @@ declare <8 x double> @llvm.vp.roundtozero.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundtozero_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
@@ -654,7 +654,7 @@ declare <15 x double> @llvm.vp.roundtozero.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundtozero_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
@@ -699,7 +699,7 @@ declare <16 x double> @llvm.vp.roundtozero.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundtozero_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
index 11b0edcd77bb6b..69d6ffa9f300c0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
@@ -598,7 +598,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: slli a1, a1, 3
@@ -649,7 +649,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
define <256 x i1> @icmp_eq_vx_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_v256i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
@@ -679,7 +679,7 @@ define <256 x i1> @icmp_eq_vx_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 z
define <256 x i1> @icmp_eq_vx_swap_v256i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_eq_vx_swap_v256i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
index 0ae55f035cc9bd..f2353e7d028bda 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-concat.ll
@@ -164,7 +164,7 @@ define <16 x i32> @concat_8xv2i32(<2 x i32> %a, <2 x i32> %b, <2 x i32> %c, <2 x
define <32 x i32> @concat_2xv16i32(<16 x i32> %a, <16 x i32> %b) {
; VLA-LABEL: concat_2xv16i32:
; VLA: # %bb.0:
-; VLA-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; VLA-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; VLA-NEXT: vmv4r.v v16, v12
; VLA-NEXT: li a0, 32
; VLA-NEXT: vsetvli zero, a0, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
index d560977d25e085..cadee8acf27d65 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-exact-vlen.ll
@@ -108,7 +108,7 @@ define <4 x i64> @m2_splat_into_identity(<4 x i64> %v1) vscale_range(2,2) {
define <4 x i64> @m2_broadcast_i128(<4 x i64> %v1) vscale_range(2,2) {
; CHECK-LABEL: m2_broadcast_i128:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: ret
%res = shufflevector <4 x i64> %v1, <4 x i64> poison, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
@@ -118,7 +118,7 @@ define <4 x i64> @m2_broadcast_i128(<4 x i64> %v1) vscale_range(2,2) {
define <8 x i64> @m4_broadcast_i128(<8 x i64> %v1) vscale_range(2,2) {
; CHECK-LABEL: m4_broadcast_i128:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vmv1r.v v11, v8
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
index 54185a64202485..49f6acf9ba8c98 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-vslide1up.ll
@@ -415,7 +415,7 @@ define <4 x i8> @vslide1up_4xi8_neg_incorrect_insert3(<4 x i8> %v, i8 %b) {
define <2 x i8> @vslide1up_4xi8_neg_length_changing(<4 x i8> %v, i8 %b) {
; CHECK-LABEL: vslide1up_4xi8_neg_length_changing:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 4, e8, m1, tu, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, tu, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetivli zero, 2, e8, mf8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
index e6edbe6afb9a7b..206ccab5523de0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
@@ -62,7 +62,7 @@ define void @gather_masked(ptr noalias nocapture %A, ptr noalias nocapture reado
; CHECK-NEXT: li a4, 5
; CHECK-NEXT: .LBB1_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetvli zero, a3, e8, m1, ta, mu
; CHECK-NEXT: vlse8.v v9, (a1), a4, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
index 81671c21e15b46..4b7f82f94f5e4d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
@@ -542,7 +542,7 @@ declare <3 x double> @llvm.experimental.vp.strided.load.v3f64.p0.i32(ptr, i32, <
define <32 x double> @strided_vpload_v32f64(ptr %ptr, i32 signext %stride, <32 x i1> %m, i32 zeroext %evl) nounwind {
; CHECK-LABEL: strided_vpload_v32f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: li a4, 16
; CHECK-NEXT: mv a3, a2
@@ -599,7 +599,7 @@ declare <32 x double> @llvm.experimental.vp.strided.load.v32f64.p0.i32(ptr, i32,
define <33 x double> @strided_load_v33f64(ptr %ptr, i64 %stride, <33 x i1> %mask, i32 zeroext %evl) {
; CHECK-RV32-LABEL: strided_load_v33f64:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v8, v0
; CHECK-RV32-NEXT: li a5, 32
; CHECK-RV32-NEXT: mv a3, a4
@@ -650,7 +650,7 @@ define <33 x double> @strided_load_v33f64(ptr %ptr, i64 %stride, <33 x i1> %mask
;
; CHECK-RV64-LABEL: strided_load_v33f64:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v8, v0
; CHECK-RV64-NEXT: li a5, 32
; CHECK-RV64-NEXT: mv a4, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
index 488f364780395b..6d9f69f436fc41 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
@@ -105,7 +105,7 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV32-SLOW-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
; RV32-SLOW-NEXT: vslideup.vi v9, v8, 1
; RV32-SLOW-NEXT: .LBB4_4: # %else2
-; RV32-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv1r.v v8, v9
; RV32-SLOW-NEXT: ret
;
@@ -139,7 +139,7 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
; RV64-SLOW-NEXT: vsetivli zero, 2, e16, mf4, ta, ma
; RV64-SLOW-NEXT: vslideup.vi v9, v8, 1
; RV64-SLOW-NEXT: .LBB4_4: # %else2
-; RV64-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv1r.v v8, v9
; RV64-SLOW-NEXT: ret
;
@@ -191,7 +191,7 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV32-SLOW-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV32-SLOW-NEXT: vslideup.vi v9, v8, 1
; RV32-SLOW-NEXT: .LBB5_4: # %else2
-; RV32-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-SLOW-NEXT: vmv1r.v v8, v9
; RV32-SLOW-NEXT: ret
;
@@ -223,7 +223,7 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
; RV64-SLOW-NEXT: vmv.s.x v8, a0
; RV64-SLOW-NEXT: vslideup.vi v9, v8, 1
; RV64-SLOW-NEXT: .LBB5_4: # %else2
-; RV64-SLOW-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-SLOW-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-SLOW-NEXT: vmv1r.v v8, v9
; RV64-SLOW-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
index 9e9989590e4eb5..7ee8179acfdb96 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vadd-vp.ll
@@ -363,7 +363,7 @@ declare <256 x i8> @llvm.vp.add.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vadd_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vadd_vi_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
index 5ec18aec2b9a60..fec54b36042fa4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmax-vp.ll
@@ -267,7 +267,7 @@ declare <256 x i8> @llvm.vp.smax.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmax_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmax_vx_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
index d47e17077bf902..7ca0dbd9adffc3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmaxu-vp.ll
@@ -266,7 +266,7 @@ declare <256 x i8> @llvm.vp.umax.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmaxu_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmaxu_vx_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
index 23d292a5fd77d2..ea75742ca6e43e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vmin-vp.ll
@@ -267,7 +267,7 @@ declare <256 x i8> @llvm.vp.smin.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vmin_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vmin_vx_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
index b400015856f84a..f4f54db64018d8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vminu-vp.ll
@@ -266,7 +266,7 @@ declare <256 x i8> @llvm.vp.umin.v258i8(<256 x i8>, <256 x i8>, <256 x i1>, i32)
define <256 x i8> @vminu_vx_v258i8(<256 x i8> %va, i8 %b, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vminu_vx_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a3, 128
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
index a971b469df0a2a..6c9989775f7902 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpload.ll
@@ -394,7 +394,7 @@ declare <33 x double> @llvm.vp.load.v33f64.p0(ptr, <33 x i1>, i32)
define <33 x double> @vpload_v33f64(ptr %ptr, <33 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpload_v33f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: li a4, 32
; CHECK-NEXT: mv a3, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
index ede197c11eb916..7afd31fdd663cd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsadd-vp.ll
@@ -372,7 +372,7 @@ declare <256 x i8> @llvm.vp.sadd.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vsadd_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsadd_vi_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
index 13dc0702c16aa5..f61b112fd80243 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vsaddu-vp.ll
@@ -368,7 +368,7 @@ declare <256 x i8> @llvm.vp.uadd.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vsaddu_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vsaddu_vi_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
index 07b08e2518ea44..5d407caf71514d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
@@ -163,7 +163,7 @@ define <256 x i8> @select_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c, i3
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v6, v8
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: li a2, 128
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
index 8a199770c163dc..6ddf2e464750e9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssub-vp.ll
@@ -384,7 +384,7 @@ declare <256 x i8> @llvm.vp.ssub.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vssub_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssub_vi_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: addi a3, a1, -128
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
index 37c5e397569277..c4035938947940 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vssubu-vp.ll
@@ -379,7 +379,7 @@ declare <256 x i8> @llvm.vp.usub.sat.v258i8(<256 x i8>, <256 x i8>, <256 x i1>,
define <256 x i8> @vssubu_vi_v258i8(<256 x i8> %va, <256 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssubu_vi_v258i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: li a2, 128
; CHECK-NEXT: addi a3, a1, -128
diff --git a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
index ff12aaac3a983d..9ed54b2ba965b3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
@@ -649,7 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.floor.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_floor_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -736,7 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.floor.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_floor_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -823,7 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.floor.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1071,7 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.floor.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_floor_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1116,7 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.floor.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_floor_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1161,7 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.floor.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_floor_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1248,7 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.floor.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_floor_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1293,7 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.floor.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_floor_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1338,7 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.floor.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_floor_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1383,7 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.floor.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_floor_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
index 025925cbce4b32..c4041fc372de45 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
@@ -569,7 +569,7 @@ declare <vscale x 1 x half> @llvm.vp.maximum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmax_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv1f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -640,7 +640,7 @@ declare <vscale x 2 x half> @llvm.vp.maximum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmax_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -711,7 +711,7 @@ declare <vscale x 4 x half> @llvm.vp.maximum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmax_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -784,7 +784,7 @@ declare <vscale x 8 x half> @llvm.vp.maximum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmax_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -859,7 +859,7 @@ declare <vscale x 16 x half> @llvm.vp.maximum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -969,7 +969,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1286,7 +1286,7 @@ declare <vscale x 1 x float> @llvm.vp.maximum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmax_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1320,7 +1320,7 @@ declare <vscale x 2 x float> @llvm.vp.maximum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmax_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x float> @llvm.vp.maximum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmax_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1390,7 +1390,7 @@ declare <vscale x 8 x float> @llvm.vp.maximum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmax_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1426,7 +1426,7 @@ declare <vscale x 1 x double> @llvm.vp.maximum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmax_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1460,7 +1460,7 @@ declare <vscale x 2 x double> @llvm.vp.maximum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmax_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1496,7 +1496,7 @@ declare <vscale x 4 x double> @llvm.vp.maximum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmax_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1538,7 +1538,7 @@ define <vscale x 8 x double> @vfmax_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
index 7bc8c3d2fdc57c..467156ab024a6f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
@@ -569,7 +569,7 @@ declare <vscale x 1 x half> @llvm.vp.minimum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmin_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv1f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -640,7 +640,7 @@ declare <vscale x 2 x half> @llvm.vp.minimum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmin_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -711,7 +711,7 @@ declare <vscale x 4 x half> @llvm.vp.minimum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmin_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -784,7 +784,7 @@ declare <vscale x 8 x half> @llvm.vp.minimum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmin_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -859,7 +859,7 @@ declare <vscale x 16 x half> @llvm.vp.minimum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -969,7 +969,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
@@ -1286,7 +1286,7 @@ declare <vscale x 1 x float> @llvm.vp.minimum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmin_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1320,7 +1320,7 @@ declare <vscale x 2 x float> @llvm.vp.minimum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmin_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x float> @llvm.vp.minimum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmin_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1390,7 +1390,7 @@ declare <vscale x 8 x float> @llvm.vp.minimum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmin_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1426,7 +1426,7 @@ declare <vscale x 1 x double> @llvm.vp.minimum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmin_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
@@ -1460,7 +1460,7 @@ declare <vscale x 2 x double> @llvm.vp.minimum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmin_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
@@ -1496,7 +1496,7 @@ declare <vscale x 4 x double> @llvm.vp.minimum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmin_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
@@ -1538,7 +1538,7 @@ define <vscale x 8 x double> @vfmin_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll b/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
index 0514e0e6949159..5e657a93ec0d63 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
@@ -18,7 +18,7 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV32-NEXT: .LBB0_1: # %for.body
; RV32-NEXT: # =>This Inner Loop Header: Depth=1
; RV32-NEXT: th.lrb a0, a1, a0, 0
-; RV32-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, tu, ma
; RV32-NEXT: vmv1r.v v9, v8
; RV32-NEXT: vmv.s.x v9, a0
; RV32-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
@@ -45,7 +45,7 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV64-NEXT: # =>This Inner Loop Header: Depth=1
; RV64-NEXT: sext.w a0, a0
; RV64-NEXT: th.lrb a0, a1, a0, 0
-; RV64-NEXT: vsetivli zero, 8, e8, m1, tu, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, tu, ma
; RV64-NEXT: vmv1r.v v9, v8
; RV64-NEXT: vmv.s.x v9, a0
; RV64-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
index 2078670f330e28..9cc9306807cc4c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
@@ -703,7 +703,7 @@ define <vscale x 16 x i32> @fshl_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re32.v v24, (a0)
; CHECK-NEXT: li a0, 31
@@ -883,7 +883,7 @@ define <vscale x 7 x i64> @fshl_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
@@ -955,7 +955,7 @@ define <vscale x 8 x i64> @fshl_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
diff --git a/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll b/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
index fa14a1f1d186b0..daacc853963fc6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/inline-asm.ll
@@ -365,13 +365,13 @@ entry:
define <vscale x 4 x i8> @test_specify_reg_mf2(<vscale x 4 x i8> %in, <vscale x 4 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_mf2:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v9
; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v1, v2
; CHECK-NEXT: #NO_APP
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -382,13 +382,13 @@ entry:
define <vscale x 8 x i8> @test_specify_reg_m1(<vscale x 8 x i8> %in, <vscale x 8 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_m1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v9
; CHECK-NEXT: vmv1r.v v1, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v1, v2
; CHECK-NEXT: #NO_APP
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -399,13 +399,13 @@ entry:
define <vscale x 16 x i8> @test_specify_reg_m2(<vscale x 16 x i8> %in, <vscale x 16 x i8> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_m2:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v4, v10
; CHECK-NEXT: vmv2r.v v2, v8
; CHECK-NEXT: #APP
; CHECK-NEXT: vadd.vv v0, v2, v4
; CHECK-NEXT: #NO_APP
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v0
; CHECK-NEXT: ret
entry:
@@ -416,7 +416,7 @@ entry:
define <vscale x 1 x i1> @test_specify_reg_mask(<vscale x 1 x i1> %in, <vscale x 1 x i1> %in2) nounwind {
; CHECK-LABEL: test_specify_reg_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v2, v8
; CHECK-NEXT: vmv1r.v v1, v0
; CHECK-NEXT: #APP
diff --git a/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
index 86a5a4878c6569..ca9cec921b3cdf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/insert-subvector.ll
@@ -5,7 +5,7 @@
define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv4i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec, i64 0)
@@ -15,7 +15,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_0(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv4i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv4i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 4 x i32> %subvec, i64 4)
@@ -25,7 +25,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv4i32_4(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 0)
@@ -35,7 +35,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_0(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 2)
@@ -45,7 +45,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_2(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 4)
@@ -55,7 +55,7 @@ define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_4(<vscale x 8 x i32> %vec, <vs
define <vscale x 8 x i32> @insert_nxv8i32_nxv2i32_6(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv8i32_nxv2i32_6:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: ret
%v = call <vscale x 8 x i32> @llvm.vector.insert.nxv2i32.nxv8i32(<vscale x 8 x i32> %vec, <vscale x 2 x i32> %subvec, i64 6)
@@ -92,7 +92,7 @@ define <vscale x 4 x i8> @insert_nxv1i8_nxv4i8_3(<vscale x 4 x i8> %vec, <vscale
define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv8i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec, i64 0)
@@ -102,7 +102,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv8i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv8i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 8 x i32> %subvec, i64 8)
@@ -112,7 +112,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv8i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 0)
@@ -122,7 +122,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v10, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 4)
@@ -132,7 +132,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_4(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 8)
@@ -142,7 +142,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv4i32_12:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv4i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 4 x i32> %subvec, i64 12)
@@ -152,7 +152,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv4i32_12(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 0)
@@ -162,7 +162,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_0(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 2)
@@ -172,7 +172,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_2(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_4:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 4)
@@ -182,7 +182,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_4(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_6:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 6)
@@ -192,7 +192,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_6(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 8)
@@ -202,7 +202,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_8(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_10:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 10)
@@ -212,7 +212,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_10(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_12:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 12)
@@ -222,7 +222,7 @@ define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_12(<vscale x 16 x i32> %vec,
define <vscale x 16 x i32> @insert_nxv16i32_nxv2i32_14(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_nxv16i32_nxv2i32_14:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v15, v16
; CHECK-NEXT: ret
%v = call <vscale x 16 x i32> @llvm.vector.insert.nxv2i32.nxv16i32(<vscale x 16 x i32> %vec, <vscale x 2 x i32> %subvec, i64 14)
@@ -532,7 +532,7 @@ define <vscale x 2 x i64> @insert_nxv2i64_nxv3i64(<3 x i64> %sv) #0 {
define <vscale x 8 x i32> @insert_insert_combine(<2 x i32> %subvec) {
; CHECK-LABEL: insert_insert_combine:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: ret
%inner = call <vscale x 4 x i32> @llvm.vector.insert.nxv4i32.v2i32(<vscale x 4 x i32> undef, <2 x i32> %subvec, i64 0)
@@ -545,7 +545,7 @@ define <vscale x 8 x i32> @insert_insert_combine(<2 x i32> %subvec) {
define <vscale x 8 x i32> @insert_insert_combine2(<vscale x 2 x i32> %subvec) {
; CHECK-LABEL: insert_insert_combine2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: ret
%inner = call <vscale x 4 x i32> @llvm.vector.insert.nxv2i32.nxv4i32(<vscale x 4 x i32> undef, <vscale x 2 x i32> %subvec, i64 0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
index 12bddded9a4075..e49484bd7e6560 100644
--- a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
@@ -1288,7 +1288,7 @@ declare <vscale x 1 x i8> @llvm.riscv.viota.mask.nxv1i8(
define <vscale x 1 x i8> @intrinsic_viota_mask_m_nxv1i8_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_viota_mask_m_nxv1i8_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -1313,7 +1313,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -1445,7 +1445,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
index 727908b67c6dd9..9ee2324f615dd8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
@@ -221,13 +221,13 @@ define <vscale x 4 x i8> @mgather_truemask_nxv4i8(<vscale x 4 x ptr> %ptrs, <vsc
define <vscale x 4 x i8> @mgather_falsemask_nxv4i8(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i8> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i8:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i8:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i8> @llvm.masked.gather.nxv4i8.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 1, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i8> %passthru)
@@ -444,13 +444,13 @@ define <vscale x 4 x i16> @mgather_truemask_nxv4i16(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i16> @mgather_falsemask_nxv4i16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i16> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i16> @llvm.masked.gather.nxv4i16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i16> %passthru)
@@ -690,13 +690,13 @@ define <vscale x 4 x i32> @mgather_truemask_nxv4i32(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i32> @mgather_falsemask_nxv4i32(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i32> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x i32> @llvm.masked.gather.nxv4i32.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 4, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i32> %passthru)
@@ -955,7 +955,7 @@ define <vscale x 4 x i64> @mgather_truemask_nxv4i64(<vscale x 4 x ptr> %ptrs, <v
define <vscale x 4 x i64> @mgather_falsemask_nxv4i64(<vscale x 4 x ptr> %ptrs, <vscale x 4 x i64> %passthru) {
; CHECK-LABEL: mgather_falsemask_nxv4i64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 4 x i64> @llvm.masked.gather.nxv4i64.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 8, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i64> %passthru)
@@ -1355,13 +1355,13 @@ define <vscale x 4 x bfloat> @mgather_truemask_nxv4bf16(<vscale x 4 x ptr> %ptrs
define <vscale x 4 x bfloat> @mgather_falsemask_nxv4bf16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x bfloat> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4bf16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4bf16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x bfloat> @llvm.masked.gather.nxv4bf16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x bfloat> %passthru)
@@ -1558,13 +1558,13 @@ define <vscale x 4 x half> @mgather_truemask_nxv4f16(<vscale x 4 x ptr> %ptrs, <
define <vscale x 4 x half> @mgather_falsemask_nxv4f16(<vscale x 4 x ptr> %ptrs, <vscale x 4 x half> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x half> @llvm.masked.gather.nxv4f16.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 2, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x half> %passthru)
@@ -1760,13 +1760,13 @@ define <vscale x 4 x float> @mgather_truemask_nxv4f32(<vscale x 4 x ptr> %ptrs,
define <vscale x 4 x float> @mgather_falsemask_nxv4f32(<vscale x 4 x ptr> %ptrs, <vscale x 4 x float> %passthru) {
; RV32-LABEL: mgather_falsemask_nxv4f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv2r.v v8, v10
; RV32-NEXT: ret
;
; RV64-LABEL: mgather_falsemask_nxv4f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv2r.v v8, v12
; RV64-NEXT: ret
%v = call <vscale x 4 x float> @llvm.masked.gather.nxv4f32.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 4, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x float> %passthru)
@@ -2025,7 +2025,7 @@ define <vscale x 4 x double> @mgather_truemask_nxv4f64(<vscale x 4 x ptr> %ptrs,
define <vscale x 4 x double> @mgather_falsemask_nxv4f64(<vscale x 4 x ptr> %ptrs, <vscale x 4 x double> %passthru) {
; CHECK-LABEL: mgather_falsemask_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: ret
%v = call <vscale x 4 x double> @llvm.masked.gather.nxv4f64.nxv4p0(<vscale x 4 x ptr> %ptrs, i32 8, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x double> %passthru)
diff --git a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
index 096d9864f940a8..895bfcdf48af63 100644
--- a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
@@ -633,7 +633,7 @@ declare <vscale x 8 x half> @llvm.vp.nearbyint.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_nearbyint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -720,7 +720,7 @@ declare <vscale x 16 x half> @llvm.vp.nearbyint.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_nearbyint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -807,7 +807,7 @@ declare <vscale x 32 x half> @llvm.vp.nearbyint.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1039,7 +1039,7 @@ declare <vscale x 4 x float> @llvm.vp.nearbyint.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_nearbyint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1084,7 +1084,7 @@ declare <vscale x 8 x float> @llvm.vp.nearbyint.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_nearbyint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1129,7 +1129,7 @@ declare <vscale x 16 x float> @llvm.vp.nearbyint.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_nearbyint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1216,7 +1216,7 @@ declare <vscale x 2 x double> @llvm.vp.nearbyint.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_nearbyint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1261,7 +1261,7 @@ declare <vscale x 4 x double> @llvm.vp.nearbyint.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_nearbyint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1306,7 +1306,7 @@ declare <vscale x 7 x double> @llvm.vp.nearbyint.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_nearbyint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1351,7 +1351,7 @@ declare <vscale x 8 x double> @llvm.vp.nearbyint.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_nearbyint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
index f40f828e834bec..1f0b10328740e9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
@@ -596,7 +596,7 @@ declare <vscale x 8 x half> @llvm.vp.rint.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_rint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -675,7 +675,7 @@ declare <vscale x 16 x half> @llvm.vp.rint.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_rint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -754,7 +754,7 @@ declare <vscale x 32 x half> @llvm.vp.rint.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -981,7 +981,7 @@ declare <vscale x 4 x float> @llvm.vp.rint.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_rint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1022,7 +1022,7 @@ declare <vscale x 8 x float> @llvm.vp.rint.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_rint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1063,7 +1063,7 @@ declare <vscale x 16 x float> @llvm.vp.rint.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_rint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1142,7 +1142,7 @@ declare <vscale x 2 x double> @llvm.vp.rint.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_rint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1183,7 +1183,7 @@ declare <vscale x 4 x double> @llvm.vp.rint.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_rint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1224,7 +1224,7 @@ declare <vscale x 7 x double> @llvm.vp.rint.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_rint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1265,7 +1265,7 @@ declare <vscale x 8 x double> @llvm.vp.rint.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_rint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
index b0bec88bd57d66..770a71502a7c7d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
@@ -649,7 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.round.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_round_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -736,7 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.round.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_round_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -823,7 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.round.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1071,7 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.round.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_round_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1116,7 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.round.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_round_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1161,7 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.round.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_round_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1248,7 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.round.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_round_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1293,7 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.round.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_round_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1338,7 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.round.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_round_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1383,7 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.round.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_round_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
index a5d346cfd81a7f..ddf95b8129c09c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
@@ -649,7 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.roundeven.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_roundeven_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -736,7 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.roundeven.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_roundeven_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -823,7 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.roundeven.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1071,7 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.roundeven.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_roundeven_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1116,7 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.roundeven.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_roundeven_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1161,7 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.roundeven.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_roundeven_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1248,7 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.roundeven.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_roundeven_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1293,7 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.roundeven.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_roundeven_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1338,7 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.roundeven.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_roundeven_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1383,7 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.roundeven.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_roundeven_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
index 13fdc11145001e..a27113a107556f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
@@ -649,7 +649,7 @@ declare <vscale x 8 x half> @llvm.vp.roundtozero.nxv8f16(<vscale x 8 x half>, <v
define <vscale x 8 x half> @vp_roundtozero_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
@@ -736,7 +736,7 @@ declare <vscale x 16 x half> @llvm.vp.roundtozero.nxv16f16(<vscale x 16 x half>,
define <vscale x 16 x half> @vp_roundtozero_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
@@ -823,7 +823,7 @@ declare <vscale x 32 x half> @llvm.vp.roundtozero.nxv32f16(<vscale x 32 x half>,
define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
@@ -1071,7 +1071,7 @@ declare <vscale x 4 x float> @llvm.vp.roundtozero.nxv4f32(<vscale x 4 x float>,
define <vscale x 4 x float> @vp_roundtozero_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vfabs.v v12, v8, v0.t
@@ -1116,7 +1116,7 @@ declare <vscale x 8 x float> @llvm.vp.roundtozero.nxv8f32(<vscale x 8 x float>,
define <vscale x 8 x float> @vp_roundtozero_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfabs.v v16, v8, v0.t
@@ -1161,7 +1161,7 @@ declare <vscale x 16 x float> @llvm.vp.roundtozero.nxv16f32(<vscale x 16 x float
define <vscale x 16 x float> @vp_roundtozero_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vfabs.v v24, v8, v0.t
@@ -1248,7 +1248,7 @@ declare <vscale x 2 x double> @llvm.vp.roundtozero.nxv2f64(<vscale x 2 x double>
define <vscale x 2 x double> @vp_roundtozero_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
@@ -1293,7 +1293,7 @@ declare <vscale x 4 x double> @llvm.vp.roundtozero.nxv4f64(<vscale x 4 x double>
define <vscale x 4 x double> @vp_roundtozero_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
@@ -1338,7 +1338,7 @@ declare <vscale x 7 x double> @llvm.vp.roundtozero.nxv7f64(<vscale x 7 x double>
define <vscale x 7 x double> @vp_roundtozero_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
@@ -1383,7 +1383,7 @@ declare <vscale x 8 x double> @llvm.vp.roundtozero.nxv8f64(<vscale x 8 x double>
define <vscale x 8 x double> @vp_roundtozero_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
index 30180e4c41218e..f636ab9ebd0ce7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
@@ -17,7 +17,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: slli a1, a1, 1
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sw a0, 8(sp) # 4-byte Folded Spill
-; SPILL-O0-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; SPILL-O0-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; SPILL-O0-NEXT: vmv1r.v v10, v9
; SPILL-O0-NEXT: vmv1r.v v9, v8
; SPILL-O0-NEXT: csrr a1, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
index a7a134db93b0eb..2cd80ef79bd829 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
@@ -20,7 +20,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: slli a1, a1, 1
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sd a0, 16(sp) # 8-byte Folded Spill
-; SPILL-O0-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; SPILL-O0-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; SPILL-O0-NEXT: vmv1r.v v10, v9
; SPILL-O0-NEXT: vmv1r.v v9, v8
; SPILL-O0-NEXT: csrr a1, vlenb
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
index 2e3f7c9bc2a6ba..53d1666c30e960 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
@@ -47,7 +47,7 @@ define <vscale x 16 x i32> @foo(i32 %0, i32 %1, i32 %2, i32 %3, i32 %4, i32 %5,
; CHECK-NEXT: vs8r.v v8, (t1)
; CHECK-NEXT: sd t1, 0(sp)
; CHECK-NEXT: sd t0, 8(sp)
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: call bar
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
index ebafaed09929a5..02279362ca14be 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
@@ -941,7 +941,7 @@ declare <vscale x 2 x i32> @llvm.riscv.vredsum.nxv2i32.nxv2i32(
define <vscale x 2 x i32> @vredsum(<vscale x 2 x i32> %passthru, <vscale x 2 x i32> %x, <vscale x 2 x i32> %y, <vscale x 2 x i1> %m, i64 %vl) {
; CHECK-LABEL: vredsum:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
; CHECK-NEXT: vredsum.vs v11, v9, v10
@@ -966,7 +966,7 @@ define <vscale x 2 x float> @vfredusum(<vscale x 2 x float> %passthru, <vscale x
; CHECK-LABEL: vfredusum:
; CHECK: # %bb.0:
; CHECK-NEXT: fsrmi a1, 0
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
; CHECK-NEXT: vfredusum.vs v11, v9, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
index e1cd2dade3124f..ecd098edb30aee 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
@@ -663,7 +663,7 @@ declare <vscale x 3 x double> @llvm.experimental.vp.strided.load.nxv3f64.p0.i32(
define <vscale x 16 x double> @strided_load_nxv16f64(ptr %ptr, i64 %stride, <vscale x 16 x i1> %mask, i32 zeroext %evl) {
; CHECK-RV32-LABEL: strided_load_nxv16f64:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v9, v0
; CHECK-RV32-NEXT: csrr a4, vlenb
; CHECK-RV32-NEXT: sub a2, a3, a4
@@ -689,7 +689,7 @@ define <vscale x 16 x double> @strided_load_nxv16f64(ptr %ptr, i64 %stride, <vsc
;
; CHECK-RV64-LABEL: strided_load_nxv16f64:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v9, v0
; CHECK-RV64-NEXT: csrr a4, vlenb
; CHECK-RV64-NEXT: sub a3, a2, a4
@@ -767,7 +767,7 @@ declare <vscale x 16 x double> @llvm.experimental.vp.strided.load.nxv16f64.p0.i6
define <vscale x 16 x double> @strided_load_nxv17f64(ptr %ptr, i64 %stride, <vscale x 17 x i1> %mask, i32 zeroext %evl, ptr %hi_ptr) {
; CHECK-RV32-LABEL: strided_load_nxv17f64:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV32-NEXT: vmv1r.v v8, v0
; CHECK-RV32-NEXT: csrr a2, vlenb
; CHECK-RV32-NEXT: slli a7, a2, 1
@@ -815,7 +815,7 @@ define <vscale x 16 x double> @strided_load_nxv17f64(ptr %ptr, i64 %stride, <vsc
;
; CHECK-RV64-LABEL: strided_load_nxv17f64:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-RV64-NEXT: vmv1r.v v8, v0
; CHECK-RV64-NEXT: csrr a4, vlenb
; CHECK-RV64-NEXT: slli a7, a4, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
index c523a6c748ce1f..9dcee7e5cb7d1e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
@@ -615,7 +615,7 @@ define void @strided_store_nxv17f64(<vscale x 17 x double> %v, ptr %ptr, i32 sig
; CHECK-NEXT: slli a4, a4, 3
; CHECK-NEXT: sub sp, sp, a4
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: addi a4, sp, 16
; CHECK-NEXT: vs8r.v v16, (a4) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
index dbe0780ce9bb89..c3a60ebf92b3c7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
@@ -43,7 +43,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vcpop_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -98,7 +98,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vcpop_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -139,7 +139,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vcpop_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -180,7 +180,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vcpop_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -221,7 +221,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vcpop_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -262,7 +262,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vcpop_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -303,7 +303,7 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vcpop_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
index 81ab670c3c283e..9084384c2e17a5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
@@ -120,7 +120,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
; CHECK-NEXT: vadd.vv v10, v8, v9
@@ -153,7 +153,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru_negative:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
; CHECK-NEXT: vadd.vv v10, v8, v9
@@ -185,7 +185,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m) nounwind {
; CHECK-LABEL: vadd_vv_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vadd.vv v10, v8, v9, v0.t
@@ -221,7 +221,7 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m, <vscale x 1 x i1> %m2) nounwind {
; CHECK-LABEL: vadd_vv_mask_negative:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vadd.vv v11, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
index 906d0642143cb9..15d4a8ee6fcb1e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
@@ -43,7 +43,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vfirst_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -98,7 +98,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vfirst_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -139,7 +139,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vfirst_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -180,7 +180,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vfirst_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -221,7 +221,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vfirst_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -262,7 +262,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vfirst_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -303,7 +303,7 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vfirst_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
index c969e34411fc90..e08293f86da371 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
@@ -7796,7 +7796,7 @@ define <vscale x 16 x half> @vfnmadd_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmadd_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
@@ -8254,7 +8254,7 @@ define <vscale x 16 x half> @vfnmsub_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmsub_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
@@ -8714,7 +8714,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: add a2, sp, a2
; ZVFHMIN-NEXT: addi a2, a2, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v8
; ZVFHMIN-NEXT: vl8re16.v v8, (a0)
; ZVFHMIN-NEXT: lui a2, 8
diff --git a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
index fa67fa38144aae..469b3ed99542e6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
@@ -111,7 +111,7 @@ define <vscale x 4 x i32> @different_vl_with_ta(<vscale x 4 x i32> %a, <vscale x
define <vscale x 4 x i32> @different_vl_with_tu(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: different_vl_with_tu:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v14, v10, v12
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
index 2962f796cf10c0..57b12450ceb56c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
@@ -51,7 +51,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
index 4ec7dae561fad4..129f800571bbcd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
@@ -25,7 +25,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -65,7 +65,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -105,7 +105,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -145,7 +145,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -185,7 +185,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -225,7 +225,7 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -265,7 +265,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -306,7 +306,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -347,7 +347,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -388,7 +388,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -429,7 +429,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -470,7 +470,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -512,7 +512,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -554,7 +554,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -596,7 +596,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -638,7 +638,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -680,7 +680,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -723,7 +723,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -766,7 +766,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -809,7 +809,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -852,7 +852,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -896,7 +896,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -940,7 +940,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -984,7 +984,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1028,7 +1028,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1073,7 +1073,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1118,7 +1118,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1163,7 +1163,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1208,7 +1208,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1254,7 +1254,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1300,7 +1300,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1346,7 +1346,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1391,7 +1391,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1430,7 +1430,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1469,7 +1469,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1508,7 +1508,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1547,7 +1547,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1586,7 +1586,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1626,7 +1626,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1666,7 +1666,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1706,7 +1706,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1746,7 +1746,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1787,7 +1787,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1828,7 +1828,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1869,7 +1869,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1910,7 +1910,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1952,7 +1952,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1994,7 +1994,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2036,7 +2036,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2079,7 +2079,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2122,7 +2122,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2165,7 +2165,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2209,7 +2209,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2253,7 +2253,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2297,7 +2297,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2342,7 +2342,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2387,7 +2387,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2432,7 +2432,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -2471,7 +2471,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -2510,7 +2510,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -2549,7 +2549,7 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -2588,7 +2588,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2628,7 +2628,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2668,7 +2668,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2708,7 +2708,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2749,7 +2749,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2790,7 +2790,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2831,7 +2831,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2873,7 +2873,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2915,7 +2915,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2958,7 +2958,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3001,7 +3001,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3045,7 +3045,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3089,7 +3089,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3134,7 +3134,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3179,7 +3179,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -3218,7 +3218,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -3257,7 +3257,7 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -3296,7 +3296,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3336,7 +3336,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3376,7 +3376,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3417,7 +3417,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3458,7 +3458,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3500,7 +3500,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3543,7 +3543,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3587,7 +3587,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3631,7 +3631,7 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -3669,7 +3669,7 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -3707,7 +3707,7 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -3745,7 +3745,7 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -3783,7 +3783,7 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -3821,7 +3821,7 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3860,7 +3860,7 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3899,7 +3899,7 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3938,7 +3938,7 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3977,7 +3977,7 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4017,7 +4017,7 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4057,7 +4057,7 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4097,7 +4097,7 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4137,7 +4137,7 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4178,7 +4178,7 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4219,7 +4219,7 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4260,7 +4260,7 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4302,7 +4302,7 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4344,7 +4344,7 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4386,7 +4386,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4429,7 +4429,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4472,7 +4472,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4515,7 +4515,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4559,7 +4559,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4603,7 +4603,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4647,7 +4647,7 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -4685,7 +4685,7 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -4723,7 +4723,7 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -4761,7 +4761,7 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -4799,7 +4799,7 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4838,7 +4838,7 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4877,7 +4877,7 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4916,7 +4916,7 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4956,7 +4956,7 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4996,7 +4996,7 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5036,7 +5036,7 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5077,7 +5077,7 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5118,7 +5118,7 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5160,7 +5160,7 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5202,7 +5202,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5245,7 +5245,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5288,7 +5288,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5332,7 +5332,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5376,7 +5376,7 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -5414,7 +5414,7 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -5452,7 +5452,7 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -5490,7 +5490,7 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5529,7 +5529,7 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5568,7 +5568,7 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5608,7 +5608,7 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5648,7 +5648,7 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5689,7 +5689,7 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5731,7 +5731,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5774,7 +5774,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5818,7 +5818,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -5856,7 +5856,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -5894,7 +5894,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -5932,7 +5932,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -5970,7 +5970,7 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -6008,7 +6008,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6047,7 +6047,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6086,7 +6086,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6125,7 +6125,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6164,7 +6164,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6204,7 +6204,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6244,7 +6244,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6284,7 +6284,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6324,7 +6324,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6365,7 +6365,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6406,7 +6406,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6447,7 +6447,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6489,7 +6489,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6531,7 +6531,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6573,7 +6573,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6616,7 +6616,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6659,7 +6659,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6702,7 +6702,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6746,7 +6746,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6790,7 +6790,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
index e7d4195e2174f1..5e0f9515806c24 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
@@ -51,7 +51,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
index 5da529a73a5240..5be6e9e86efec7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
@@ -25,7 +25,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -65,7 +65,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -105,7 +105,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -145,7 +145,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -185,7 +185,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -225,7 +225,7 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -265,7 +265,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -306,7 +306,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -347,7 +347,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -388,7 +388,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -429,7 +429,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -470,7 +470,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -512,7 +512,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -554,7 +554,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -596,7 +596,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -638,7 +638,7 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -680,7 +680,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -723,7 +723,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -766,7 +766,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -809,7 +809,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -852,7 +852,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -896,7 +896,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -940,7 +940,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -984,7 +984,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1028,7 +1028,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1073,7 +1073,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1118,7 +1118,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1163,7 +1163,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1208,7 +1208,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1254,7 +1254,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1300,7 +1300,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1346,7 +1346,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1391,7 +1391,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1430,7 +1430,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1469,7 +1469,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1508,7 +1508,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1547,7 +1547,7 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1586,7 +1586,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1626,7 +1626,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1666,7 +1666,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1706,7 +1706,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1746,7 +1746,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1787,7 +1787,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1828,7 +1828,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1869,7 +1869,7 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -1910,7 +1910,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1952,7 +1952,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1994,7 +1994,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2036,7 +2036,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2079,7 +2079,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2122,7 +2122,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2165,7 +2165,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2209,7 +2209,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2253,7 +2253,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2297,7 +2297,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2342,7 +2342,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2387,7 +2387,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2432,7 +2432,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -2471,7 +2471,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -2510,7 +2510,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -2549,7 +2549,7 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -2588,7 +2588,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2628,7 +2628,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2668,7 +2668,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2708,7 +2708,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2749,7 +2749,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2790,7 +2790,7 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -2831,7 +2831,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2873,7 +2873,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2915,7 +2915,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2958,7 +2958,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3001,7 +3001,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3045,7 +3045,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3089,7 +3089,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3134,7 +3134,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3179,7 +3179,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -3218,7 +3218,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -3257,7 +3257,7 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -3296,7 +3296,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3336,7 +3336,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3376,7 +3376,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3417,7 +3417,7 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3458,7 +3458,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3500,7 +3500,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3543,7 +3543,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3587,7 +3587,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3631,7 +3631,7 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -3669,7 +3669,7 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -3707,7 +3707,7 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -3745,7 +3745,7 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -3783,7 +3783,7 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -3821,7 +3821,7 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3860,7 +3860,7 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3899,7 +3899,7 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3938,7 +3938,7 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -3977,7 +3977,7 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4017,7 +4017,7 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4057,7 +4057,7 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4097,7 +4097,7 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4137,7 +4137,7 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4178,7 +4178,7 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4219,7 +4219,7 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4260,7 +4260,7 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4302,7 +4302,7 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4344,7 +4344,7 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4386,7 +4386,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4429,7 +4429,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4472,7 +4472,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4515,7 +4515,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4559,7 +4559,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4603,7 +4603,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4647,7 +4647,7 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -4685,7 +4685,7 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -4723,7 +4723,7 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -4761,7 +4761,7 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -4799,7 +4799,7 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4838,7 +4838,7 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4877,7 +4877,7 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -4916,7 +4916,7 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4956,7 +4956,7 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4996,7 +4996,7 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5036,7 +5036,7 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5077,7 +5077,7 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5118,7 +5118,7 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5160,7 +5160,7 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5202,7 +5202,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5245,7 +5245,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5288,7 +5288,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5332,7 +5332,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5376,7 +5376,7 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -5414,7 +5414,7 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -5452,7 +5452,7 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -5490,7 +5490,7 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5529,7 +5529,7 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5568,7 +5568,7 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5608,7 +5608,7 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -5648,7 +5648,7 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5689,7 +5689,7 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5731,7 +5731,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5774,7 +5774,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5818,7 +5818,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -5856,7 +5856,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -5894,7 +5894,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -5932,7 +5932,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -5970,7 +5970,7 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -6008,7 +6008,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6047,7 +6047,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6086,7 +6086,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6125,7 +6125,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6164,7 +6164,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6204,7 +6204,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6244,7 +6244,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6284,7 +6284,7 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
@@ -6324,7 +6324,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6365,7 +6365,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6406,7 +6406,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6447,7 +6447,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6489,7 +6489,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6531,7 +6531,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6573,7 +6573,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6616,7 +6616,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6659,7 +6659,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6702,7 +6702,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6746,7 +6746,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6790,7 +6790,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
index 7216789241f9c8..b7c990e0251396 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v9
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v10
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfeq.vv v0, v8, v12
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
index c50653730b38ca..04fb0977db9cf8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v9, v8
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v10, v8
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v12, v8
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
index 6370e8287c77e3..2b7d84c8cadb42 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v9, v8
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v10, v8
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v12, v8
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
index d9ccc19b62095f..9594a2e368ad95 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v9
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v10
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfle.vv v0, v8, v12
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
index d6b2163a82ca08..8ebc35d4f01e5f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v9
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v10
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vv v0, v8, v12
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
index e93f6cec3971ce..d5f7387ec1ddce 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -294,7 +294,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -346,7 +346,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -450,7 +450,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -502,7 +502,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v9
@@ -554,7 +554,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v10
@@ -606,7 +606,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfne.vv v0, v8, v12
@@ -658,7 +658,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -706,7 +706,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -754,7 +754,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -802,7 +802,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -850,7 +850,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -898,7 +898,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -946,7 +946,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -994,7 +994,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -1042,7 +1042,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -1090,7 +1090,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -1138,7 +1138,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -1186,7 +1186,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
index 023e0ffdea0c24..669ed0ba89ab50 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
@@ -31,7 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -74,7 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsbf.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -117,7 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsbf.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -160,7 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsbf.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -203,7 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsbf.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -246,7 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsbf.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -289,7 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
index 4ba813e88faf07..f955fa85149719 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmseq.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
index 62d4a6df5ddd81..7675fb91dc261b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v9, v8
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v10, v8
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v12, v8
@@ -971,7 +971,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1020,7 +1020,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1069,7 +1069,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1090,7 +1090,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: li a1, 99
; CHECK-NEXT: vmv1r.v v0, v9
@@ -1158,7 +1158,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1207,7 +1207,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1256,7 +1256,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1305,7 +1305,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1403,7 +1403,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1452,7 +1452,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1501,7 +1501,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1550,7 +1550,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1599,7 +1599,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1648,7 +1648,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1697,7 +1697,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1773,7 +1773,7 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1849,7 +1849,7 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1925,7 +1925,7 @@ define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1961,7 +1961,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1997,7 +1997,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -2064,7 +2064,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2100,7 +2100,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2136,7 +2136,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2172,7 +2172,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2208,7 +2208,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2244,7 +2244,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2280,7 +2280,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2316,7 +2316,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2352,7 +2352,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2388,7 +2388,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2424,7 +2424,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2460,7 +2460,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2496,7 +2496,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2532,7 +2532,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2568,7 +2568,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2604,7 +2604,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
index ab0e75ef09e53e..763b6b6167c16d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v9, v8
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v10, v8
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v12, v8
@@ -971,7 +971,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1020,7 +1020,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1069,7 +1069,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1118,7 +1118,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1167,7 +1167,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1216,7 +1216,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1265,7 +1265,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1314,7 +1314,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1363,7 +1363,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1412,7 +1412,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1461,7 +1461,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1510,7 +1510,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1559,7 +1559,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1608,7 +1608,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1657,7 +1657,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1733,7 +1733,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1809,7 +1809,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1885,7 +1885,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1921,7 +1921,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1957,7 +1957,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1993,7 +1993,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2014,7 +2014,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: li a1, 99
; CHECK-NEXT: vmv1r.v v0, v9
@@ -2051,7 +2051,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2087,7 +2087,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2123,7 +2123,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2159,7 +2159,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2258,7 +2258,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2294,7 +2294,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2330,7 +2330,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2366,7 +2366,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2402,7 +2402,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2438,7 +2438,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2474,7 +2474,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2510,7 +2510,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2546,7 +2546,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2582,7 +2582,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
index a0b1ac655a0dd2..ea54351268288a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v9, v8
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v10, v8
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v12, v8
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
index c29df31a3e6008..d59a200a8185f7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v9, v8
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v10, v8
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v12, v8
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
index b666ab5ab87b1a..7a068d1f3b7f8b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
@@ -31,7 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsif.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsif_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -74,7 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsif.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsif_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -117,7 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsif.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsif_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -160,7 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsif.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsif_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -203,7 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsif.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsif_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -246,7 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsif.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsif_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -289,7 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsif.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsif_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
index c849b41461eb4e..209ec617674bfa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
index ccd70da73aabb5..0c5eb11e0b1ca2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
index c60f1d4ce0d2e5..c71fc2a1b3b424 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmslt.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
index a9f4dc45a9aebd..52bb3c746b7dca 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsltu.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
index 43f09561bf7526..15a496baed9888 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
@@ -34,7 +34,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -86,7 +86,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -138,7 +138,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -190,7 +190,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -242,7 +242,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -294,7 +294,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -346,7 +346,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -398,7 +398,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -450,7 +450,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -502,7 +502,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -554,7 +554,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -606,7 +606,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -658,7 +658,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -710,7 +710,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -762,7 +762,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -814,7 +814,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v9
@@ -866,7 +866,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v10
@@ -918,7 +918,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsne.vv v0, v8, v12
@@ -970,7 +970,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
@@ -1018,7 +1018,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
@@ -1066,7 +1066,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
@@ -1114,7 +1114,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
@@ -1162,7 +1162,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
@@ -1210,7 +1210,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
@@ -1258,7 +1258,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
@@ -1306,7 +1306,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
@@ -1354,7 +1354,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
@@ -1402,7 +1402,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
@@ -1450,7 +1450,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
@@ -1498,7 +1498,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
@@ -1546,7 +1546,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
@@ -1594,7 +1594,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
@@ -1642,7 +1642,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
@@ -1717,7 +1717,7 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
@@ -1792,7 +1792,7 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
@@ -1867,7 +1867,7 @@ define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
@@ -1903,7 +1903,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -1939,7 +1939,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -1975,7 +1975,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -2011,7 +2011,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -2047,7 +2047,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -2083,7 +2083,7 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -2119,7 +2119,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
@@ -2155,7 +2155,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
@@ -2191,7 +2191,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
@@ -2227,7 +2227,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
@@ -2263,7 +2263,7 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
@@ -2299,7 +2299,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
@@ -2335,7 +2335,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
@@ -2371,7 +2371,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
@@ -2407,7 +2407,7 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
@@ -2443,7 +2443,7 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
@@ -2479,7 +2479,7 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
@@ -2515,7 +2515,7 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
index e80131315b5e79..c243bd647469e0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
@@ -31,7 +31,7 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsof.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsof_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
@@ -74,7 +74,7 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsof.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsof_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
@@ -117,7 +117,7 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsof.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsof_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
@@ -160,7 +160,7 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsof.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsof_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
@@ -203,7 +203,7 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsof.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsof_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
@@ -246,7 +246,7 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsof.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsof_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
@@ -289,7 +289,7 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsof.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsof_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
index 7107063693a08d..09d4aadf968f95 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
@@ -49,7 +49,7 @@ define <vscale x 4 x i32> @vadd_same_passthru(<vscale x 4 x i32> %passthru, <vsc
define <vscale x 4 x i32> @unfoldable_diff_avl_unknown(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: unfoldable_diff_avl_unknown:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v14, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v14, v10, v12
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
index fa2f750c6000c3..9028b82a70dd51 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
@@ -5,7 +5,7 @@
define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -18,7 +18,7 @@ define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
;
; RV64-LABEL: bool_vec:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v9, v0
; RV64-NEXT: slli a0, a0, 32
; RV64-NEXT: srli a0, a0, 32
@@ -37,7 +37,7 @@ define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
define iXLen @bool_vec_zero_poison(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec_zero_poison:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -46,7 +46,7 @@ define iXLen @bool_vec_zero_poison(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m,
;
; RV64-LABEL: bool_vec_zero_poison:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; RV64-NEXT: vmv1r.v v9, v0
; RV64-NEXT: slli a0, a0, 32
; RV64-NEXT: srli a0, a0, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-select.ll b/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
index 5aaf33a4c4406e..652baf6692341f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-select.ll
@@ -12,7 +12,7 @@ define <vscale x 1 x i64> @all_ones(<vscale x 1 x i64> %true, <vscale x 1 x i64>
define <vscale x 1 x i64> @all_zeroes(<vscale x 1 x i64> %true, <vscale x 1 x i64> %false, i32 %evl) {
; CHECK-LABEL: all_zeroes:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: ret
%v = call <vscale x 1 x i64> @llvm.vp.select.nxv1i64(<vscale x 1 x i1> splat (i1 false), <vscale x 1 x i64> %true, <vscale x 1 x i64> %false, i32 %evl)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
index a03a12871fc470..c4084c9225e876 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
@@ -10,7 +10,7 @@ declare <16 x i1> @llvm.experimental.vp.splice.v16i1(<16 x i1>, <16 x i1>, i32,
define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -35,7 +35,7 @@ define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %ev
define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -60,7 +60,7 @@ define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb,
define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -86,7 +86,7 @@ define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1>
define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -111,7 +111,7 @@ define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %ev
define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -136,7 +136,7 @@ define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb,
define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -162,7 +162,7 @@ define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1>
define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -187,7 +187,7 @@ define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %ev
define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -212,7 +212,7 @@ define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb,
define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -238,7 +238,7 @@ define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1>
define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -263,7 +263,7 @@ define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext
define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -288,7 +288,7 @@ define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1>
define <16 x i1> @test_vp_splice_v16i1_masked(<16 x i1> %va, <16 x i1> %vb, <16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
index 39e77b124aee28..1605fe0c4c7f55 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
@@ -13,7 +13,7 @@ declare <vscale x 64 x i1> @llvm.experimental.vp.splice.nxv64i1(<vscale x 64 x i
define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -38,7 +38,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -63,7 +63,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, <vscale x 1 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -89,7 +89,7 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <v
define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -114,7 +114,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -139,7 +139,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, <vscale x 2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -165,7 +165,7 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <v
define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -190,7 +190,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -215,7 +215,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, <vscale x 4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -241,7 +241,7 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <v
define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -266,7 +266,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -291,7 +291,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, <vscale x 8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -317,7 +317,7 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <v
define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -342,7 +342,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscal
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -367,7 +367,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, <vscale x 16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -394,7 +394,7 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va,
define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -419,7 +419,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscal
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -444,7 +444,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, <vscale x 32 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -471,7 +471,7 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va,
define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -496,7 +496,7 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscal
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -521,7 +521,7 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_masked(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, <vscale x 64 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpload.ll b/llvm/test/CodeGen/RISCV/rvv/vpload.ll
index 74f642829e0d09..0844180e496126 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpload.ll
@@ -561,7 +561,7 @@ declare <vscale x 16 x double> @llvm.vector.extract.nxv16f64(<vscale x 17 x doub
define <vscale x 16 x double> @vpload_nxv17f64(ptr %ptr, ptr %out, <vscale x 17 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpload_nxv17f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v8, v0
; CHECK-NEXT: csrr a3, vlenb
; CHECK-NEXT: slli a5, a3, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
index 35f922734c57a8..7e7da529bf3d73 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
@@ -468,7 +468,7 @@ define void @vpstore_nxv17f64(<vscale x 17 x double> %val, ptr %ptr, <vscale x 1
; CHECK-NEXT: slli a3, a3, 3
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v24, v0
; CHECK-NEXT: addi a3, sp, 16
; CHECK-NEXT: vs8r.v v16, (a3) # Unknown-size Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
index 2f1c3d80ef88d7..f2234c69e14315 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
@@ -23,7 +23,7 @@ declare i1 @llvm.vp.reduce.or.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>, i
define zeroext i1 @vpreduce_or_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -40,7 +40,7 @@ declare i1 @llvm.vp.reduce.xor.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_xor_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -73,7 +73,7 @@ declare i1 @llvm.vp.reduce.or.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>, i
define zeroext i1 @vpreduce_or_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -90,7 +90,7 @@ declare i1 @llvm.vp.reduce.xor.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_xor_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -123,7 +123,7 @@ declare i1 @llvm.vp.reduce.or.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>, i
define zeroext i1 @vpreduce_or_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -140,7 +140,7 @@ declare i1 @llvm.vp.reduce.xor.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_xor_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -173,7 +173,7 @@ declare i1 @llvm.vp.reduce.or.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>, i
define zeroext i1 @vpreduce_or_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -190,7 +190,7 @@ declare i1 @llvm.vp.reduce.xor.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_xor_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -223,7 +223,7 @@ declare i1 @llvm.vp.reduce.or.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1>
define zeroext i1 @vpreduce_or_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -240,7 +240,7 @@ declare i1 @llvm.vp.reduce.xor.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_xor_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -273,7 +273,7 @@ declare i1 @llvm.vp.reduce.or.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1>
define zeroext i1 @vpreduce_or_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -290,7 +290,7 @@ declare i1 @llvm.vp.reduce.xor.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_xor_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -307,7 +307,7 @@ declare i1 @llvm.vp.reduce.or.nxv40i1(i1, <vscale x 40 x i1>, <vscale x 40 x i1>
define zeroext i1 @vpreduce_or_nxv40i1(i1 zeroext %s, <vscale x 40 x i1> %v, <vscale x 40 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv40i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -340,7 +340,7 @@ declare i1 @llvm.vp.reduce.or.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1>
define zeroext i1 @vpreduce_or_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -357,7 +357,7 @@ declare i1 @llvm.vp.reduce.xor.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_xor_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -374,7 +374,7 @@ declare i1 @llvm.vp.reduce.or.nxv128i1(i1, <vscale x 128 x i1>, <vscale x 128 x
define zeroext i1 @vpreduce_or_nxv128i1(i1 zeroext %s, <vscale x 128 x i1> %v, <vscale x 128 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv128i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: csrr a2, vlenb
; CHECK-NEXT: slli a2, a2, 3
@@ -406,7 +406,7 @@ declare i1 @llvm.vp.reduce.add.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_add_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -423,7 +423,7 @@ declare i1 @llvm.vp.reduce.add.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_add_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -440,7 +440,7 @@ declare i1 @llvm.vp.reduce.add.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_add_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -457,7 +457,7 @@ declare i1 @llvm.vp.reduce.add.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_add_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -474,7 +474,7 @@ declare i1 @llvm.vp.reduce.add.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_add_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -491,7 +491,7 @@ declare i1 @llvm.vp.reduce.add.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_add_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -508,7 +508,7 @@ declare i1 @llvm.vp.reduce.add.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_add_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -638,7 +638,7 @@ declare i1 @llvm.vp.reduce.smin.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_smin_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -655,7 +655,7 @@ declare i1 @llvm.vp.reduce.smin.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_smin_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -672,7 +672,7 @@ declare i1 @llvm.vp.reduce.smin.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_smin_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -689,7 +689,7 @@ declare i1 @llvm.vp.reduce.smin.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_smin_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -706,7 +706,7 @@ declare i1 @llvm.vp.reduce.smin.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_smin_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -723,7 +723,7 @@ declare i1 @llvm.vp.reduce.smin.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_smin_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -740,7 +740,7 @@ declare i1 @llvm.vp.reduce.smin.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_smin_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
@@ -757,7 +757,7 @@ declare i1 @llvm.vp.reduce.umax.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_umax_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -774,7 +774,7 @@ declare i1 @llvm.vp.reduce.umax.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_umax_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -791,7 +791,7 @@ declare i1 @llvm.vp.reduce.umax.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_umax_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -808,7 +808,7 @@ declare i1 @llvm.vp.reduce.umax.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_umax_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
@@ -825,7 +825,7 @@ declare i1 @llvm.vp.reduce.umax.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_umax_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
@@ -842,7 +842,7 @@ declare i1 @llvm.vp.reduce.umax.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_umax_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
@@ -859,7 +859,7 @@ declare i1 @llvm.vp.reduce.umax.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_umax_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
index b81986a13bd670..b793531e340694 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-bf16.ll
@@ -126,7 +126,7 @@ define <vscale x 8 x bfloat> @vmerge_truelhs_nxv8bf16_0(<vscale x 8 x bfloat> %v
define <vscale x 8 x bfloat> @vmerge_falselhs_nxv8bf16_0(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8bf16_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
index a2b4f308b6a323..be2fc6955294d5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-fp.ll
@@ -175,7 +175,7 @@ define <vscale x 8 x half> @vmerge_truelhs_nxv8f16_0(<vscale x 8 x half> %va, <v
define <vscale x 8 x half> @vmerge_falselhs_nxv8f16_0(<vscale x 8 x half> %va, <vscale x 8 x half> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8f16_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x half> %va, <vscale x 8 x half> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
index 8179374cb8c7f4..4ec9e344e62787 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-int.ll
@@ -803,7 +803,7 @@ define <vscale x 8 x i64> @vmerge_truelhs_nxv8i64_0(<vscale x 8 x i64> %va, <vsc
define <vscale x 8 x i64> @vmerge_falselhs_nxv8i64_0(<vscale x 8 x i64> %va, <vscale x 8 x i64> %vb) {
; CHECK-LABEL: vmerge_falselhs_nxv8i64_0:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv8r.v v8, v16
; CHECK-NEXT: ret
%vc = select <vscale x 8 x i1> zeroinitializer, <vscale x 8 x i64> %va, <vscale x 8 x i64> %vb
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
index a7c521cd464369..d4ebe27420d7b3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
@@ -835,7 +835,7 @@ define <vscale x 2 x i1> @select_cond_x_cond(<vscale x 2 x i1> %x, <vscale x 2 x
define <vscale x 2 x i1> @select_undef_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_undef_T_F:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> poison, <vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 %evl)
@@ -853,7 +853,7 @@ define <vscale x 2 x i1> @select_undef_undef_F(<vscale x 2 x i1> %x, i32 zeroext
define <vscale x 2 x i1> @select_unknown_undef_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_undef_F:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> undef, <vscale x 2 x i1> %y, i32 %evl)
@@ -863,7 +863,7 @@ define <vscale x 2 x i1> @select_unknown_undef_F(<vscale x 2 x i1> %x, <vscale x
define <vscale x 2 x i1> @select_unknown_T_undef(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_T_undef:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> poison, i32 %evl)
@@ -873,7 +873,7 @@ define <vscale x 2 x i1> @select_unknown_T_undef(<vscale x 2 x i1> %x, <vscale x
define <vscale x 2 x i1> @select_false_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> %z, i32 zeroext %evl) {
; CHECK-LABEL: select_false_T_F:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v9
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> zeroinitializer, <vscale x 2 x i1> %y, <vscale x 2 x i1> %z, i32 %evl)
@@ -883,7 +883,7 @@ define <vscale x 2 x i1> @select_false_T_F(<vscale x 2 x i1> %x, <vscale x 2 x i
define <vscale x 2 x i1> @select_unknown_T_T(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, i32 zeroext %evl) {
; CHECK-LABEL: select_unknown_T_T:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
%a = call <vscale x 2 x i1> @llvm.vp.select.nxv2i1(<vscale x 2 x i1> %x, <vscale x 2 x i1> %y, <vscale x 2 x i1> %y, i32 %evl)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
index 8186c67bf93e07..d8620d06582619 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-O0.ll
@@ -108,7 +108,7 @@ entry:
define <vscale x 1 x double> @intrinsic_same_avl_reg(i64 %avl, <vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_same_avl_reg:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetvli a0, a0, e32, mf2, ta, ma
; CHECK-NEXT: # implicit-def: $v9
@@ -136,7 +136,7 @@ entry:
define <vscale x 1 x double> @intrinsic_diff_avl_reg(i64 %avl, i64 %avl2, <vscale x 1 x double> %a, <vscale x 1 x double> %b) nounwind {
; CHECK-LABEL: intrinsic_diff_avl_reg:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 0, e8, m1, ta, ma
+; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: vsetvli a0, a0, e32, mf2, ta, ma
; CHECK-NEXT: # implicit-def: $v9
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
index 2ca2803f3c746a..8b48dc43eca29a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert.ll
@@ -377,7 +377,7 @@ entry:
define <vscale x 1 x double> @test19(<vscale x 1 x double> %a, double %b) nounwind {
; CHECK-LABEL: test19:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 2, e64, m8, tu, ma
+; CHECK-NEXT: vsetivli zero, 1, e64, m8, tu, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vfmv.s.f v9, fa0
; CHECK-NEXT: vsetvli a0, zero, e64, m1, ta, ma
>From a0e2911231f859f0dba8fa7d9f3ae2ba2f414cfe Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 02:14:27 +0800
Subject: [PATCH 5/9] Update tests after rebasing on top of #118285
---
llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll | 58 +-
.../RISCV/rvv/fixed-vectors-bitreverse-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-ceil-vp.ll | 41 +-
.../RISCV/rvv/fixed-vectors-floor-vp.ll | 41 +-
.../RISCV/rvv/fixed-vectors-fmaximum-vp.ll | 36 +-
.../RISCV/rvv/fixed-vectors-fminimum-vp.ll | 36 +-
.../RISCV/rvv/fixed-vectors-nearbyint-vp.ll | 41 +-
.../rvv/fixed-vectors-reduction-mask-vp.ll | 87 +--
.../RISCV/rvv/fixed-vectors-rint-vp.ll | 41 +-
.../RISCV/rvv/fixed-vectors-round-vp.ll | 41 +-
.../RISCV/rvv/fixed-vectors-roundeven-vp.ll | 41 +-
.../RISCV/rvv/fixed-vectors-roundtozero-vp.ll | 41 +-
.../fixed-vectors-strided-load-store-asm.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/floor-vp.ll | 58 +-
llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll | 42 +-
llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll | 42 +-
llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll | 9 +-
llvm/test/CodeGen/RISCV/rvv/masked-tama.ll | 9 +-
llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll | 58 +-
llvm/test/CodeGen/RISCV/rvv/rint-vp.ll | 58 +-
llvm/test/CodeGen/RISCV/rvv/round-vp.ll | 58 +-
llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll | 58 +-
llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll | 58 +-
.../RISCV/rvv/rvv-peephole-vmerge-vops.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vcpop.ll | 21 +-
.../RISCV/rvv/vector-reassociations.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vfirst.ll | 21 +-
llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll | 9 +-
llvm/test/CodeGen/RISCV/rvv/vl-opt.ll | 3 +-
.../CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll | 495 ++++++------------
.../CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll | 495 ++++++------------
llvm/test/CodeGen/RISCV/rvv/vmfeq.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmfge.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmfgt.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmfle.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmflt.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmfne.ll | 72 +--
llvm/test/CodeGen/RISCV/rvv/vmsbf.ll | 21 +-
llvm/test/CodeGen/RISCV/rvv/vmseq.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsge.ll | 169 ++----
llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll | 166 ++----
llvm/test/CodeGen/RISCV/rvv/vmsgt.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsif.ll | 21 +-
llvm/test/CodeGen/RISCV/rvv/vmsle.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsleu.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmslt.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsltu.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsne.ll | 162 ++----
llvm/test/CodeGen/RISCV/rvv/vmsof.ll | 21 +-
.../CodeGen/RISCV/rvv/vmv.v.v-peephole.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll | 6 +-
.../RISCV/rvv/vp-splice-mask-fixed-vectors.ll | 36 +-
.../RISCV/rvv/vp-splice-mask-vectors.ll | 63 +--
.../CodeGen/RISCV/rvv/vreductions-mask-vp.ll | 108 ++--
57 files changed, 1515 insertions(+), 2858 deletions(-)
diff --git a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
index 48103170cba9a2..a81f0ac982f569 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
@@ -649,11 +649,10 @@ declare <vscale x 8 x half> @llvm.vp.ceil.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_ceil_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -736,11 +735,10 @@ declare <vscale x 16 x half> @llvm.vp.ceil.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_ceil_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -823,11 +821,10 @@ declare <vscale x 32 x half> @llvm.vp.ceil.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_vv_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1071,9 +1068,8 @@ declare <vscale x 4 x float> @llvm.vp.ceil.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_ceil_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1116,9 +1112,8 @@ declare <vscale x 8 x float> @llvm.vp.ceil.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_ceil_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1161,9 +1156,8 @@ declare <vscale x 16 x float> @llvm.vp.ceil.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_ceil_vv_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1248,11 +1242,10 @@ declare <vscale x 2 x double> @llvm.vp.ceil.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_ceil_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1293,11 +1286,10 @@ declare <vscale x 4 x double> @llvm.vp.ceil.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_ceil_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1338,11 +1330,10 @@ declare <vscale x 7 x double> @llvm.vp.ceil.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_ceil_vv_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1383,11 +1374,10 @@ declare <vscale x 8 x double> @llvm.vp.ceil.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_ceil_vv_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_vv_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
index d4aa05156b66cd..5ea4924468595d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
@@ -1659,7 +1659,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
-; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
@@ -1678,7 +1678,6 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32-NEXT: sw a3, 36(sp)
; RV32-NEXT: addi a3, sp, 16
; RV32-NEXT: addi a4, a5, 1365
-; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsll.vx v16, v8, a1, v0.t
; RV32-NEXT: addi a5, a6, -256
; RV32-NEXT: sw a4, 24(sp)
@@ -2056,7 +2055,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x18, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 24 * vlenb
-; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vmv8r.v v24, v8
; RV32-NEXT: lui a2, 1044480
; RV32-NEXT: lui a3, 61681
@@ -2075,7 +2074,6 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32-NEXT: sw a3, 36(sp)
; RV32-NEXT: addi a3, sp, 16
; RV32-NEXT: addi a4, a5, 1365
-; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsll.vx v16, v8, a1, v0.t
; RV32-NEXT: addi a5, a6, -256
; RV32-NEXT: sw a4, 24(sp)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
index ca80a314c680a9..5fe55583f314cf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
@@ -261,11 +261,10 @@ declare <16 x half> @llvm.vp.ceil.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_ceil_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_ceil_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI6_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -432,9 +431,8 @@ declare <8 x float> @llvm.vp.ceil.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_ceil_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -477,9 +475,8 @@ declare <16 x float> @llvm.vp.ceil.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_ceil_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -564,11 +561,10 @@ declare <4 x double> @llvm.vp.ceil.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_ceil_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -609,11 +605,10 @@ declare <8 x double> @llvm.vp.ceil.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_ceil_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -654,11 +649,10 @@ declare <15 x double> @llvm.vp.ceil.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_ceil_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -699,11 +693,10 @@ declare <16 x double> @llvm.vp.ceil.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_ceil_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_ceil_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
index 9c8d9879b2f07f..49255320c40a62 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
@@ -261,11 +261,10 @@ declare <16 x half> @llvm.vp.floor.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_floor_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI6_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -432,9 +431,8 @@ declare <8 x float> @llvm.vp.floor.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_floor_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -477,9 +475,8 @@ declare <16 x float> @llvm.vp.floor.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_floor_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -564,11 +561,10 @@ declare <4 x double> @llvm.vp.floor.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_floor_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -609,11 +605,10 @@ declare <8 x double> @llvm.vp.floor.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_floor_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -654,11 +649,10 @@ declare <15 x double> @llvm.vp.floor.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_floor_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -699,11 +693,10 @@ declare <16 x double> @llvm.vp.floor.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_floor_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
index adeaf2390f1ef3..11f92555f56cda 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
@@ -13,9 +13,8 @@ declare <2 x half> @llvm.vp.maximum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmax_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -84,9 +83,8 @@ declare <4 x half> @llvm.vp.maximum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmax_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -155,9 +153,8 @@ declare <8 x half> @llvm.vp.maximum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmax_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -228,9 +225,8 @@ declare <16 x half> @llvm.vp.maximum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmax_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v13
; ZVFH-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -303,9 +299,8 @@ declare <2 x float> @llvm.vp.maximum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmax_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -337,9 +332,8 @@ declare <4 x float> @llvm.vp.maximum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmax_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -371,9 +365,8 @@ declare <8 x float> @llvm.vp.maximum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmax_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -407,9 +400,8 @@ declare <16 x float> @llvm.vp.maximum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmax_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -443,9 +435,8 @@ declare <2 x double> @llvm.vp.maximum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmax_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -477,9 +468,8 @@ declare <4 x double> @llvm.vp.maximum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmax_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -513,9 +503,8 @@ declare <8 x double> @llvm.vp.maximum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmax_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -555,9 +544,8 @@ define <16 x double> @vfmax_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v25
; CHECK-NEXT: vmerge.vvm v24, v8, v16, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
index a3795581decd2f..3fb586b67a21bc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
@@ -13,9 +13,8 @@ declare <2 x half> @llvm.vp.minimum.v2f16(<2 x half>, <2 x half>, <2 x i1>, i32)
define <2 x half> @vfmin_vv_v2f16(<2 x half> %va, <2 x half> %vb, <2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -84,9 +83,8 @@ declare <4 x half> @llvm.vp.minimum.v4f16(<4 x half>, <4 x half>, <4 x i1>, i32)
define <4 x half> @vfmin_vv_v4f16(<4 x half> %va, <4 x half> %vb, <4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -155,9 +153,8 @@ declare <8 x half> @llvm.vp.minimum.v8f16(<8 x half>, <8 x half>, <8 x i1>, i32)
define <8 x half> @vfmin_vv_v8f16(<8 x half> %va, <8 x half> %vb, <8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -228,9 +225,8 @@ declare <16 x half> @llvm.vp.minimum.v16f16(<16 x half>, <16 x half>, <16 x i1>,
define <16 x half> @vfmin_vv_v16f16(<16 x half> %va, <16 x half> %vb, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v13
; ZVFH-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -303,9 +299,8 @@ declare <2 x float> @llvm.vp.minimum.v2f32(<2 x float>, <2 x float>, <2 x i1>, i
define <2 x float> @vfmin_vv_v2f32(<2 x float> %va, <2 x float> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -337,9 +332,8 @@ declare <4 x float> @llvm.vp.minimum.v4f32(<4 x float>, <4 x float>, <4 x i1>, i
define <4 x float> @vfmin_vv_v4f32(<4 x float> %va, <4 x float> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -371,9 +365,8 @@ declare <8 x float> @llvm.vp.minimum.v8f32(<8 x float>, <8 x float>, <8 x i1>, i
define <8 x float> @vfmin_vv_v8f32(<8 x float> %va, <8 x float> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -407,9 +400,8 @@ declare <16 x float> @llvm.vp.minimum.v16f32(<16 x float>, <16 x float>, <16 x i
define <16 x float> @vfmin_vv_v16f32(<16 x float> %va, <16 x float> %vb, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -443,9 +435,8 @@ declare <2 x double> @llvm.vp.minimum.v2f64(<2 x double>, <2 x double>, <2 x i1>
define <2 x double> @vfmin_vv_v2f64(<2 x double> %va, <2 x double> %vb, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -477,9 +468,8 @@ declare <4 x double> @llvm.vp.minimum.v4f64(<4 x double>, <4 x double>, <4 x i1>
define <4 x double> @vfmin_vv_v4f64(<4 x double> %va, <4 x double> %vb, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -513,9 +503,8 @@ declare <8 x double> @llvm.vp.minimum.v8f64(<8 x double>, <8 x double>, <8 x i1>
define <8 x double> @vfmin_vv_v8f64(<8 x double> %va, <8 x double> %vb, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -555,9 +544,8 @@ define <16 x double> @vfmin_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v25
; CHECK-NEXT: vmerge.vvm v24, v8, v16, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
index cecb47d16ce4b4..80a9143d1ad8bc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-nearbyint-vp.ll
@@ -135,11 +135,10 @@ declare <16 x half> @llvm.vp.nearbyint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_nearbyint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
-; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI6_0)
+; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -264,9 +263,8 @@ declare <8 x float> @llvm.vp.nearbyint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_nearbyint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -309,9 +307,8 @@ declare <16 x float> @llvm.vp.nearbyint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_nearbyint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -396,11 +393,10 @@ declare <4 x double> @llvm.vp.nearbyint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_nearbyint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -441,11 +437,10 @@ declare <8 x double> @llvm.vp.nearbyint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_nearbyint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -486,11 +481,10 @@ declare <15 x double> @llvm.vp.nearbyint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_nearbyint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -531,11 +525,10 @@ declare <16 x double> @llvm.vp.nearbyint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_nearbyint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
index 05b4150e8c328f..276f6b077931bf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-mask-vp.ll
@@ -23,10 +23,9 @@ declare i1 @llvm.vp.reduce.or.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_or_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -40,10 +39,9 @@ declare i1 @llvm.vp.reduce.xor.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_xor_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -73,10 +71,9 @@ declare i1 @llvm.vp.reduce.or.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_or_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -90,10 +87,9 @@ declare i1 @llvm.vp.reduce.xor.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_xor_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -123,10 +119,9 @@ declare i1 @llvm.vp.reduce.or.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_or_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -140,10 +135,9 @@ declare i1 @llvm.vp.reduce.xor.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_xor_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -173,10 +167,9 @@ declare i1 @llvm.vp.reduce.or.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_or_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -190,10 +183,9 @@ declare i1 @llvm.vp.reduce.xor.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_xor_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -274,10 +266,9 @@ declare i1 @llvm.vp.reduce.or.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_or_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -291,10 +282,9 @@ declare i1 @llvm.vp.reduce.xor.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_xor_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -308,10 +298,9 @@ declare i1 @llvm.vp.reduce.add.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_add_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -325,10 +314,9 @@ declare i1 @llvm.vp.reduce.add.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_add_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -342,10 +330,9 @@ declare i1 @llvm.vp.reduce.add.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_add_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -359,10 +346,9 @@ declare i1 @llvm.vp.reduce.add.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_add_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -376,10 +362,9 @@ declare i1 @llvm.vp.reduce.add.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_add_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -505,10 +490,9 @@ declare i1 @llvm.vp.reduce.smin.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_smin_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -522,10 +506,9 @@ declare i1 @llvm.vp.reduce.smin.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_smin_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -539,10 +522,9 @@ declare i1 @llvm.vp.reduce.smin.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_smin_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -556,10 +538,9 @@ declare i1 @llvm.vp.reduce.smin.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_smin_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -573,10 +554,9 @@ declare i1 @llvm.vp.reduce.smin.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_smin_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -590,10 +570,9 @@ declare i1 @llvm.vp.reduce.smin.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_smin_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -607,10 +586,9 @@ declare i1 @llvm.vp.reduce.smin.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_smin_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_v64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -624,10 +602,9 @@ declare i1 @llvm.vp.reduce.umax.v1i1(i1, <1 x i1>, <1 x i1>, i32)
define zeroext i1 @vpreduce_umax_v1i1(i1 zeroext %s, <1 x i1> %v, <1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -641,10 +618,9 @@ declare i1 @llvm.vp.reduce.umax.v2i1(i1, <2 x i1>, <2 x i1>, i32)
define zeroext i1 @vpreduce_umax_v2i1(i1 zeroext %s, <2 x i1> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -658,10 +634,9 @@ declare i1 @llvm.vp.reduce.umax.v4i1(i1, <4 x i1>, <4 x i1>, i32)
define zeroext i1 @vpreduce_umax_v4i1(i1 zeroext %s, <4 x i1> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -675,10 +650,9 @@ declare i1 @llvm.vp.reduce.umax.v8i1(i1, <8 x i1>, <8 x i1>, i32)
define zeroext i1 @vpreduce_umax_v8i1(i1 zeroext %s, <8 x i1> %v, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -692,10 +666,9 @@ declare i1 @llvm.vp.reduce.umax.v16i1(i1, <16 x i1>, <16 x i1>, i32)
define zeroext i1 @vpreduce_umax_v16i1(i1 zeroext %s, <16 x i1> %v, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -709,10 +682,9 @@ declare i1 @llvm.vp.reduce.umax.v32i1(i1, <32 x i1>, <32 x i1>, i32)
define zeroext i1 @vpreduce_umax_v32i1(i1 zeroext %s, <32 x i1> %v, <32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -726,10 +698,9 @@ declare i1 @llvm.vp.reduce.umax.v64i1(i1, <64 x i1>, <64 x i1>, i32)
define zeroext i1 @vpreduce_umax_v64i1(i1 zeroext %s, <64 x i1> %v, <64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
index f92bd4421e769e..266772d36ee9cd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
@@ -123,11 +123,10 @@ declare <16 x half> @llvm.vp.rint.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_rint_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI6_0)
-; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI6_0)
+; CHECK-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -240,9 +239,8 @@ declare <8 x float> @llvm.vp.rint.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_rint_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -281,9 +279,8 @@ declare <16 x float> @llvm.vp.rint.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_rint_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -360,11 +357,10 @@ declare <4 x double> @llvm.vp.rint.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_rint_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -401,11 +397,10 @@ declare <8 x double> @llvm.vp.rint.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_rint_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -442,11 +437,10 @@ declare <15 x double> @llvm.vp.rint.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_rint_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -483,11 +477,10 @@ declare <16 x double> @llvm.vp.rint.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_rint_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
index e94734d66f1d56..232a8a4827cb1f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
@@ -261,11 +261,10 @@ declare <16 x half> @llvm.vp.round.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_round_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI6_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -432,9 +431,8 @@ declare <8 x float> @llvm.vp.round.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_round_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -477,9 +475,8 @@ declare <16 x float> @llvm.vp.round.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_round_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -564,11 +561,10 @@ declare <4 x double> @llvm.vp.round.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_round_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -609,11 +605,10 @@ declare <8 x double> @llvm.vp.round.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_round_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -654,11 +649,10 @@ declare <15 x double> @llvm.vp.round.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_round_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -699,11 +693,10 @@ declare <16 x double> @llvm.vp.round.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_round_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
index 8cbf21c07e7541..7c80c037403c2e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
@@ -261,11 +261,10 @@ declare <16 x half> @llvm.vp.roundeven.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundeven_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI6_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -432,9 +431,8 @@ declare <8 x float> @llvm.vp.roundeven.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundeven_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -477,9 +475,8 @@ declare <16 x float> @llvm.vp.roundeven.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundeven_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -564,11 +561,10 @@ declare <4 x double> @llvm.vp.roundeven.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundeven_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -609,11 +605,10 @@ declare <8 x double> @llvm.vp.roundeven.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundeven_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -654,11 +649,10 @@ declare <15 x double> @llvm.vp.roundeven.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundeven_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -699,11 +693,10 @@ declare <16 x double> @llvm.vp.roundeven.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundeven_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
index 98071d83e0e522..65a4725267cd3d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
@@ -261,11 +261,10 @@ declare <16 x half> @llvm.vp.roundtozero.v16f16(<16 x half>, <16 x i1>, i32)
define <16 x half> @vp_roundtozero_v16f16(<16 x half> %va, <16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_v16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI6_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI6_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI6_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -432,9 +431,8 @@ declare <8 x float> @llvm.vp.roundtozero.v8f32(<8 x float>, <8 x i1>, i32)
define <8 x float> @vp_roundtozero_v8f32(<8 x float> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -477,9 +475,8 @@ declare <16 x float> @llvm.vp.roundtozero.v16f32(<16 x float>, <16 x i1>, i32)
define <16 x float> @vp_roundtozero_v16f32(<16 x float> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -564,11 +561,10 @@ declare <4 x double> @llvm.vp.roundtozero.v4f64(<4 x double>, <4 x i1>, i32)
define <4 x double> @vp_roundtozero_v4f64(<4 x double> %va, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI18_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI18_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI18_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -609,11 +605,10 @@ declare <8 x double> @llvm.vp.roundtozero.v8f64(<8 x double>, <8 x i1>, i32)
define <8 x double> @vp_roundtozero_v8f64(<8 x double> %va, <8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI20_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI20_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI20_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -654,11 +649,10 @@ declare <15 x double> @llvm.vp.roundtozero.v15f64(<15 x double>, <15 x i1>, i32)
define <15 x double> @vp_roundtozero_v15f64(<15 x double> %va, <15 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v15f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI22_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI22_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI22_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -699,11 +693,10 @@ declare <16 x double> @llvm.vp.roundtozero.v16f64(<16 x double>, <16 x i1>, i32)
define <16 x double> @vp_roundtozero_v16f64(<16 x double> %va, <16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_v16f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI24_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI24_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI24_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
index 206ccab5523de0..29d9a8a9b060ce 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
@@ -62,9 +62,8 @@ define void @gather_masked(ptr noalias nocapture %A, ptr noalias nocapture reado
; CHECK-NEXT: li a4, 5
; CHECK-NEXT: .LBB1_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetvli zero, a3, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vlse8.v v9, (a1), a4, v0.t
; CHECK-NEXT: vle8.v v10, (a0)
; CHECK-NEXT: vadd.vv v9, v10, v9
diff --git a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
index 9ed54b2ba965b3..74a00c655d5269 100644
--- a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
@@ -649,11 +649,10 @@ declare <vscale x 8 x half> @llvm.vp.floor.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_floor_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -736,11 +735,10 @@ declare <vscale x 16 x half> @llvm.vp.floor.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_floor_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -823,11 +821,10 @@ declare <vscale x 32 x half> @llvm.vp.floor.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_floor_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1071,9 +1068,8 @@ declare <vscale x 4 x float> @llvm.vp.floor.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_floor_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1116,9 +1112,8 @@ declare <vscale x 8 x float> @llvm.vp.floor.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_floor_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1161,9 +1156,8 @@ declare <vscale x 16 x float> @llvm.vp.floor.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_floor_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1248,11 +1242,10 @@ declare <vscale x 2 x double> @llvm.vp.floor.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_floor_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1293,11 +1286,10 @@ declare <vscale x 4 x double> @llvm.vp.floor.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_floor_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1338,11 +1330,10 @@ declare <vscale x 7 x double> @llvm.vp.floor.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_floor_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1383,11 +1374,10 @@ declare <vscale x 8 x double> @llvm.vp.floor.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_floor_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_floor_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
index c4041fc372de45..7649d60def111b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
@@ -569,9 +569,8 @@ declare <vscale x 1 x half> @llvm.vp.maximum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmax_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv1f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -640,9 +639,8 @@ declare <vscale x 2 x half> @llvm.vp.maximum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmax_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -711,9 +709,8 @@ declare <vscale x 4 x half> @llvm.vp.maximum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmax_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -784,9 +781,8 @@ declare <vscale x 8 x half> @llvm.vp.maximum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmax_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v13
; ZVFH-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -859,9 +855,8 @@ declare <vscale x 16 x half> @llvm.vp.maximum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmax_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v17
; ZVFH-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -969,9 +964,8 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v25
; ZVFH-NEXT: vmerge.vvm v24, v8, v16, v0
@@ -1286,9 +1280,8 @@ declare <vscale x 1 x float> @llvm.vp.maximum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmax_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1320,9 +1313,8 @@ declare <vscale x 2 x float> @llvm.vp.maximum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmax_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1354,9 +1346,8 @@ declare <vscale x 4 x float> @llvm.vp.maximum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmax_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -1390,9 +1381,8 @@ declare <vscale x 8 x float> @llvm.vp.maximum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmax_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -1426,9 +1416,8 @@ declare <vscale x 1 x double> @llvm.vp.maximum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmax_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv1f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1460,9 +1449,8 @@ declare <vscale x 2 x double> @llvm.vp.maximum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmax_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -1496,9 +1484,8 @@ declare <vscale x 4 x double> @llvm.vp.maximum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmax_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmax_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -1538,9 +1525,8 @@ define <vscale x 8 x double> @vfmax_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v25
; CHECK-NEXT: vmerge.vvm v24, v8, v16, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
index 467156ab024a6f..8e448fcda9c5d6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
@@ -569,9 +569,8 @@ declare <vscale x 1 x half> @llvm.vp.minimum.nxv1f16(<vscale x 1 x half>, <vscal
define <vscale x 1 x half> @vfmin_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv1f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -640,9 +639,8 @@ declare <vscale x 2 x half> @llvm.vp.minimum.nxv2f16(<vscale x 2 x half>, <vscal
define <vscale x 2 x half> @vfmin_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv2f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -711,9 +709,8 @@ declare <vscale x 4 x half> @llvm.vp.minimum.nxv4f16(<vscale x 4 x half>, <vscal
define <vscale x 4 x half> @vfmin_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv4f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
; ZVFH-NEXT: vmfeq.vv v0, v8, v8, v0.t
; ZVFH-NEXT: vmerge.vvm v11, v8, v9, v0
; ZVFH-NEXT: vmv1r.v v0, v10
@@ -784,9 +781,8 @@ declare <vscale x 8 x half> @llvm.vp.minimum.nxv8f16(<vscale x 8 x half>, <vscal
define <vscale x 8 x half> @vfmin_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
; ZVFH-NEXT: vmfeq.vv v13, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v13
; ZVFH-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -859,9 +855,8 @@ declare <vscale x 16 x half> @llvm.vp.minimum.nxv16f16(<vscale x 16 x half>, <vs
define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vfmin_vv_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
; ZVFH-NEXT: vmfeq.vv v17, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v17
; ZVFH-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -969,9 +964,8 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: slli a1, a1, 3
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v7, v0
; ZVFH-NEXT: vmfeq.vv v25, v8, v8, v0.t
; ZVFH-NEXT: vmv1r.v v0, v25
; ZVFH-NEXT: vmerge.vvm v24, v8, v16, v0
@@ -1286,9 +1280,8 @@ declare <vscale x 1 x float> @llvm.vp.minimum.nxv1f32(<vscale x 1 x float>, <vsc
define <vscale x 1 x float> @vfmin_vv_nxv1f32(<vscale x 1 x float> %va, <vscale x 1 x float> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1320,9 +1313,8 @@ declare <vscale x 2 x float> @llvm.vp.minimum.nxv2f32(<vscale x 2 x float>, <vsc
define <vscale x 2 x float> @vfmin_vv_nxv2f32(<vscale x 2 x float> %va, <vscale x 2 x float> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1354,9 +1346,8 @@ declare <vscale x 4 x float> @llvm.vp.minimum.nxv4f32(<vscale x 4 x float>, <vsc
define <vscale x 4 x float> @vfmin_vv_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x float> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -1390,9 +1381,8 @@ declare <vscale x 8 x float> @llvm.vp.minimum.nxv8f32(<vscale x 8 x float>, <vsc
define <vscale x 8 x float> @vfmin_vv_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x float> %vb, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -1426,9 +1416,8 @@ declare <vscale x 1 x double> @llvm.vp.minimum.nxv1f64(<vscale x 1 x double>, <v
define <vscale x 1 x double> @vfmin_vv_nxv1f64(<vscale x 1 x double> %va, <vscale x 1 x double> %vb, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv1f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v8, v0.t
; CHECK-NEXT: vmerge.vvm v11, v8, v9, v0
; CHECK-NEXT: vmv1r.v v0, v10
@@ -1460,9 +1449,8 @@ declare <vscale x 2 x double> @llvm.vp.minimum.nxv2f64(<vscale x 2 x double>, <v
define <vscale x 2 x double> @vfmin_vv_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x double> %vb, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vmfeq.vv v13, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: vmerge.vvm v14, v8, v10, v0
@@ -1496,9 +1484,8 @@ declare <vscale x 4 x double> @llvm.vp.minimum.nxv4f64(<vscale x 4 x double>, <v
define <vscale x 4 x double> @vfmin_vv_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x double> %vb, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vfmin_vv_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vmfeq.vv v17, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v17
; CHECK-NEXT: vmerge.vvm v20, v8, v12, v0
@@ -1538,9 +1525,8 @@ define <vscale x 8 x double> @vfmin_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v7, v0
; CHECK-NEXT: vmfeq.vv v25, v8, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v25
; CHECK-NEXT: vmerge.vvm v24, v8, v16, v0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
index 9cc9306807cc4c..b569efc7447da6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
@@ -703,11 +703,10 @@ define <vscale x 16 x i32> @fshl_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re32.v v24, (a0)
; CHECK-NEXT: li a0, 31
-; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
; CHECK-NEXT: vand.vx v8, v24, a0, v0.t
; CHECK-NEXT: vsll.vv v8, v16, v8, v0.t
; CHECK-NEXT: vnot.v v16, v24, v0.t
@@ -883,11 +882,10 @@ define <vscale x 7 x i64> @fshl_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
-; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; CHECK-NEXT: vand.vx v8, v24, a0, v0.t
; CHECK-NEXT: vsll.vv v8, v16, v8, v0.t
; CHECK-NEXT: vnot.v v16, v24, v0.t
@@ -955,11 +953,10 @@ define <vscale x 8 x i64> @fshl_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a2, sp, 16
; CHECK-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; CHECK-NEXT: vmv8r.v v16, v8
; CHECK-NEXT: vl8re64.v v24, (a0)
; CHECK-NEXT: li a0, 63
-; CHECK-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; CHECK-NEXT: vand.vx v8, v24, a0, v0.t
; CHECK-NEXT: vsll.vv v8, v16, v8, v0.t
; CHECK-NEXT: vnot.v v16, v24, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
index e49484bd7e6560..cd678c97407218 100644
--- a/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/masked-tama.ll
@@ -1288,10 +1288,9 @@ declare <vscale x 1 x i8> @llvm.riscv.viota.mask.nxv1i8(
define <vscale x 1 x i8> @intrinsic_viota_mask_m_nxv1i8_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_viota_mask_m_nxv1i8_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: viota.m v8, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -1313,10 +1312,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vmsbf.m v8, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
@@ -1445,10 +1443,9 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vmsbf.m v8, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v8
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
index 895bfcdf48af63..9aa26e59c6a035 100644
--- a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
@@ -633,11 +633,10 @@ declare <vscale x 8 x half> @llvm.vp.nearbyint.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_nearbyint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -720,11 +719,10 @@ declare <vscale x 16 x half> @llvm.vp.nearbyint.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_nearbyint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -807,11 +805,10 @@ declare <vscale x 32 x half> @llvm.vp.nearbyint.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_nearbyint_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1039,9 +1036,8 @@ declare <vscale x 4 x float> @llvm.vp.nearbyint.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_nearbyint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1084,9 +1080,8 @@ declare <vscale x 8 x float> @llvm.vp.nearbyint.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_nearbyint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1129,9 +1124,8 @@ declare <vscale x 16 x float> @llvm.vp.nearbyint.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_nearbyint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1216,11 +1210,10 @@ declare <vscale x 2 x double> @llvm.vp.nearbyint.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_nearbyint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1261,11 +1254,10 @@ declare <vscale x 4 x double> @llvm.vp.nearbyint.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_nearbyint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1306,11 +1298,10 @@ declare <vscale x 7 x double> @llvm.vp.nearbyint.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_nearbyint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1351,11 +1342,10 @@ declare <vscale x 8 x double> @llvm.vp.nearbyint.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_nearbyint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_nearbyint_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
index 1f0b10328740e9..70ea1bc78d2e53 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
@@ -596,11 +596,10 @@ declare <vscale x 8 x half> @llvm.vp.rint.nxv8f16(<vscale x 8 x half>, <vscale x
define <vscale x 8 x half> @vp_rint_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -675,11 +674,10 @@ declare <vscale x 16 x half> @llvm.vp.rint.nxv16f16(<vscale x 16 x half>, <vscal
define <vscale x 16 x half> @vp_rint_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -754,11 +752,10 @@ declare <vscale x 32 x half> @llvm.vp.rint.nxv32f16(<vscale x 32 x half>, <vscal
define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_rint_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -981,9 +978,8 @@ declare <vscale x 4 x float> @llvm.vp.rint.nxv4f32(<vscale x 4 x float>, <vscale
define <vscale x 4 x float> @vp_rint_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1022,9 +1018,8 @@ declare <vscale x 8 x float> @llvm.vp.rint.nxv8f32(<vscale x 8 x float>, <vscale
define <vscale x 8 x float> @vp_rint_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1063,9 +1058,8 @@ declare <vscale x 16 x float> @llvm.vp.rint.nxv16f32(<vscale x 16 x float>, <vsc
define <vscale x 16 x float> @vp_rint_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1142,11 +1136,10 @@ declare <vscale x 2 x double> @llvm.vp.rint.nxv2f64(<vscale x 2 x double>, <vsca
define <vscale x 2 x double> @vp_rint_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1183,11 +1176,10 @@ declare <vscale x 4 x double> @llvm.vp.rint.nxv4f64(<vscale x 4 x double>, <vsca
define <vscale x 4 x double> @vp_rint_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1224,11 +1216,10 @@ declare <vscale x 7 x double> @llvm.vp.rint.nxv7f64(<vscale x 7 x double>, <vsca
define <vscale x 7 x double> @vp_rint_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1265,11 +1256,10 @@ declare <vscale x 8 x double> @llvm.vp.rint.nxv8f64(<vscale x 8 x double>, <vsca
define <vscale x 8 x double> @vp_rint_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_rint_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
index 770a71502a7c7d..4e15f13570583c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
@@ -649,11 +649,10 @@ declare <vscale x 8 x half> @llvm.vp.round.nxv8f16(<vscale x 8 x half>, <vscale
define <vscale x 8 x half> @vp_round_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -736,11 +735,10 @@ declare <vscale x 16 x half> @llvm.vp.round.nxv16f16(<vscale x 16 x half>, <vsca
define <vscale x 16 x half> @vp_round_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -823,11 +821,10 @@ declare <vscale x 32 x half> @llvm.vp.round.nxv32f16(<vscale x 32 x half>, <vsca
define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_round_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1071,9 +1068,8 @@ declare <vscale x 4 x float> @llvm.vp.round.nxv4f32(<vscale x 4 x float>, <vscal
define <vscale x 4 x float> @vp_round_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1116,9 +1112,8 @@ declare <vscale x 8 x float> @llvm.vp.round.nxv8f32(<vscale x 8 x float>, <vscal
define <vscale x 8 x float> @vp_round_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1161,9 +1156,8 @@ declare <vscale x 16 x float> @llvm.vp.round.nxv16f32(<vscale x 16 x float>, <vs
define <vscale x 16 x float> @vp_round_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1248,11 +1242,10 @@ declare <vscale x 2 x double> @llvm.vp.round.nxv2f64(<vscale x 2 x double>, <vsc
define <vscale x 2 x double> @vp_round_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1293,11 +1286,10 @@ declare <vscale x 4 x double> @llvm.vp.round.nxv4f64(<vscale x 4 x double>, <vsc
define <vscale x 4 x double> @vp_round_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1338,11 +1330,10 @@ declare <vscale x 7 x double> @llvm.vp.round.nxv7f64(<vscale x 7 x double>, <vsc
define <vscale x 7 x double> @vp_round_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1383,11 +1374,10 @@ declare <vscale x 8 x double> @llvm.vp.round.nxv8f64(<vscale x 8 x double>, <vsc
define <vscale x 8 x double> @vp_round_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_round_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
index ddf95b8129c09c..f1510a8c531812 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
@@ -649,11 +649,10 @@ declare <vscale x 8 x half> @llvm.vp.roundeven.nxv8f16(<vscale x 8 x half>, <vsc
define <vscale x 8 x half> @vp_roundeven_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -736,11 +735,10 @@ declare <vscale x 16 x half> @llvm.vp.roundeven.nxv16f16(<vscale x 16 x half>, <
define <vscale x 16 x half> @vp_roundeven_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -823,11 +821,10 @@ declare <vscale x 32 x half> @llvm.vp.roundeven.nxv32f16(<vscale x 32 x half>, <
define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundeven_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1071,9 +1068,8 @@ declare <vscale x 4 x float> @llvm.vp.roundeven.nxv4f32(<vscale x 4 x float>, <v
define <vscale x 4 x float> @vp_roundeven_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1116,9 +1112,8 @@ declare <vscale x 8 x float> @llvm.vp.roundeven.nxv8f32(<vscale x 8 x float>, <v
define <vscale x 8 x float> @vp_roundeven_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1161,9 +1156,8 @@ declare <vscale x 16 x float> @llvm.vp.roundeven.nxv16f32(<vscale x 16 x float>,
define <vscale x 16 x float> @vp_roundeven_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1248,11 +1242,10 @@ declare <vscale x 2 x double> @llvm.vp.roundeven.nxv2f64(<vscale x 2 x double>,
define <vscale x 2 x double> @vp_roundeven_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1293,11 +1286,10 @@ declare <vscale x 4 x double> @llvm.vp.roundeven.nxv4f64(<vscale x 4 x double>,
define <vscale x 4 x double> @vp_roundeven_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1338,11 +1330,10 @@ declare <vscale x 7 x double> @llvm.vp.roundeven.nxv7f64(<vscale x 7 x double>,
define <vscale x 7 x double> @vp_roundeven_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1383,11 +1374,10 @@ declare <vscale x 8 x double> @llvm.vp.roundeven.nxv8f64(<vscale x 8 x double>,
define <vscale x 8 x double> @vp_roundeven_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundeven_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
index a27113a107556f..dd7db58ccdf340 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
@@ -649,11 +649,10 @@ declare <vscale x 8 x half> @llvm.vp.roundtozero.nxv8f16(<vscale x 8 x half>, <v
define <vscale x 8 x half> @vp_roundtozero_nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv8f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v10, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI18_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFH-NEXT: vmv1r.v v10, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI18_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI18_0)(a0)
; ZVFH-NEXT: vfabs.v v12, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, mu
; ZVFH-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -736,11 +735,10 @@ declare <vscale x 16 x half> @llvm.vp.roundtozero.nxv16f16(<vscale x 16 x half>,
define <vscale x 16 x half> @vp_roundtozero_nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv16f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v12, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI20_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFH-NEXT: vmv1r.v v12, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI20_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI20_0)(a0)
; ZVFH-NEXT: vfabs.v v16, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, mu
; ZVFH-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -823,11 +821,10 @@ declare <vscale x 32 x half> @llvm.vp.roundtozero.nxv32f16(<vscale x 32 x half>,
define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; ZVFH-LABEL: vp_roundtozero_nxv32f16:
; ZVFH: # %bb.0:
-; ZVFH-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; ZVFH-NEXT: vmv1r.v v16, v0
-; ZVFH-NEXT: lui a1, %hi(.LCPI22_0)
-; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a1)
; ZVFH-NEXT: vsetvli zero, a0, e16, m8, ta, ma
+; ZVFH-NEXT: vmv1r.v v16, v0
+; ZVFH-NEXT: lui a0, %hi(.LCPI22_0)
+; ZVFH-NEXT: flh fa5, %lo(.LCPI22_0)(a0)
; ZVFH-NEXT: vfabs.v v24, v8, v0.t
; ZVFH-NEXT: vsetvli zero, zero, e16, m8, ta, mu
; ZVFH-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1071,9 +1068,8 @@ declare <vscale x 4 x float> @llvm.vp.roundtozero.nxv4f32(<vscale x 4 x float>,
define <vscale x 4 x float> @vp_roundtozero_nxv4f32(<vscale x 4 x float> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1116,9 +1112,8 @@ declare <vscale x 8 x float> @llvm.vp.roundtozero.nxv8f32(<vscale x 8 x float>,
define <vscale x 8 x float> @vp_roundtozero_nxv8f32(<vscale x 8 x float> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1161,9 +1156,8 @@ declare <vscale x 16 x float> @llvm.vp.roundtozero.nxv16f32(<vscale x 16 x float
define <vscale x 16 x float> @vp_roundtozero_nxv16f32(<vscale x 16 x float> %va, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv16f32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: lui a0, 307200
; CHECK-NEXT: fmv.w.x fa5, a0
@@ -1248,11 +1242,10 @@ declare <vscale x 2 x double> @llvm.vp.roundtozero.nxv2f64(<vscale x 2 x double>
define <vscale x 2 x double> @vp_roundtozero_nxv2f64(<vscale x 2 x double> %va, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv2f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI36_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vmv1r.v v10, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI36_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI36_0)(a0)
; CHECK-NEXT: vfabs.v v12, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v12, fa5, v0.t
@@ -1293,11 +1286,10 @@ declare <vscale x 4 x double> @llvm.vp.roundtozero.nxv4f64(<vscale x 4 x double>
define <vscale x 4 x double> @vp_roundtozero_nxv4f64(<vscale x 4 x double> %va, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv4f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v12, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI38_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, ma
+; CHECK-NEXT: vmv1r.v v12, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI38_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI38_0)(a0)
; CHECK-NEXT: vfabs.v v16, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v12, v16, fa5, v0.t
@@ -1338,11 +1330,10 @@ declare <vscale x 7 x double> @llvm.vp.roundtozero.nxv7f64(<vscale x 7 x double>
define <vscale x 7 x double> @vp_roundtozero_nxv7f64(<vscale x 7 x double> %va, <vscale x 7 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv7f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI40_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI40_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI40_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
@@ -1383,11 +1374,10 @@ declare <vscale x 8 x double> @llvm.vp.roundtozero.nxv8f64(<vscale x 8 x double>
define <vscale x 8 x double> @vp_roundtozero_nxv8f64(<vscale x 8 x double> %va, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vp_roundtozero_nxv8f64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v16, v0
-; CHECK-NEXT: lui a1, %hi(.LCPI42_0)
-; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a1)
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
+; CHECK-NEXT: vmv1r.v v16, v0
+; CHECK-NEXT: lui a0, %hi(.LCPI42_0)
+; CHECK-NEXT: fld fa5, %lo(.LCPI42_0)(a0)
; CHECK-NEXT: vfabs.v v24, v8, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vmflt.vf v16, v24, fa5, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
index 02279362ca14be..d329979857a6b9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-peephole-vmerge-vops.ll
@@ -941,9 +941,8 @@ declare <vscale x 2 x i32> @llvm.riscv.vredsum.nxv2i32.nxv2i32(
define <vscale x 2 x i32> @vredsum(<vscale x 2 x i32> %passthru, <vscale x 2 x i32> %x, <vscale x 2 x i32> %y, <vscale x 2 x i1> %m, i64 %vl) {
; CHECK-LABEL: vredsum:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
+; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vredsum.vs v11, v9, v10
; CHECK-NEXT: vmerge.vvm v8, v8, v11, v0
; CHECK-NEXT: ret
@@ -966,9 +965,8 @@ define <vscale x 2 x float> @vfredusum(<vscale x 2 x float> %passthru, <vscale x
; CHECK-LABEL: vfredusum:
; CHECK: # %bb.0:
; CHECK-NEXT: fsrmi a1, 0
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m1, tu, ma
+; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vfredusum.vs v11, v9, v10
; CHECK-NEXT: vmerge.vvm v8, v8, v11, v0
; CHECK-NEXT: fsrm a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
index c3a60ebf92b3c7..6b35e4767b239a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vcpop.ll
@@ -43,10 +43,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vcpop_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -98,10 +97,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vcpop_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -139,10 +137,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vcpop_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -180,10 +177,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vcpop_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -221,10 +217,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vcpop_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -262,10 +257,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vcpop_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -303,10 +297,9 @@ declare iXLen @llvm.riscv.vcpop.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vcpop_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vcpop_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
index 9084384c2e17a5..6435c1c14e061e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-reassociations.ll
@@ -120,9 +120,8 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
+; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vadd.vv v10, v8, v9
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vadd.vv v9, v8, v8
@@ -153,9 +152,8 @@ entry:
define <vscale x 1 x i8> @vadd_vv_passthru_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2) nounwind {
; CHECK-LABEL: vadd_vv_passthru_negative:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, tu, ma
+; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vadd.vv v10, v8, v9
; CHECK-NEXT: vadd.vv v9, v8, v10
; CHECK-NEXT: vadd.vv v8, v8, v9
@@ -185,9 +183,8 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m) nounwind {
; CHECK-LABEL: vadd_vv_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v10, v8
; CHECK-NEXT: vadd.vv v10, v8, v9, v0.t
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vadd.vv v9, v8, v8, v0.t
@@ -221,9 +218,8 @@ entry:
define <vscale x 1 x i8> @vadd_vv_mask_negative(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, i32 %2, <vscale x 1 x i1> %m, <vscale x 1 x i1> %m2) nounwind {
; CHECK-LABEL: vadd_vv_mask_negative:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v8
; CHECK-NEXT: vadd.vv v11, v8, v9, v0.t
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vadd.vv v9, v8, v11, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
index 15d4a8ee6fcb1e..c510121ee3ebed 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfirst.ll
@@ -43,10 +43,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv1i1(
define iXLen @intrinsic_vfirst_mask_m_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -98,10 +97,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv2i1(
define iXLen @intrinsic_vfirst_mask_m_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -139,10 +137,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv4i1(
define iXLen @intrinsic_vfirst_mask_m_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -180,10 +177,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv8i1(
define iXLen @intrinsic_vfirst_mask_m_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -221,10 +217,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv16i1(
define iXLen @intrinsic_vfirst_mask_m_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -262,10 +257,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv32i1(
define iXLen @intrinsic_vfirst_mask_m_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
@@ -303,10 +297,9 @@ declare iXLen @llvm.riscv.vfirst.mask.iXLen.nxv64i1(
define iXLen @intrinsic_vfirst_mask_m_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, iXLen %2) nounwind {
; CHECK-LABEL: intrinsic_vfirst_mask_m_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
; CHECK-NEXT: vfirst.m a0, v9, v0.t
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
index e08293f86da371..7ca1983e8b32c0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
@@ -7796,10 +7796,9 @@ define <vscale x 16 x half> @vfnmadd_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmadd_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
-; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFHMIN-NEXT: vxor.vx v12, v12, a1, v0.t
; ZVFHMIN-NEXT: vxor.vx v16, v16, a1, v0.t
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -8254,10 +8253,9 @@ define <vscale x 16 x half> @vfnmsub_vv_nxv16f16(<vscale x 16 x half> %va, <vsca
;
; ZVFHMIN-LABEL: vfnmsub_vv_nxv16f16:
; ZVFHMIN: # %bb.0:
-; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFHMIN-NEXT: vmv4r.v v4, v8
; ZVFHMIN-NEXT: lui a1, 8
-; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFHMIN-NEXT: vxor.vx v12, v12, a1, v0.t
; ZVFHMIN-NEXT: vxor.vx v16, v16, a1, v0.t
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -8714,12 +8712,11 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: add a2, sp, a2
; ZVFHMIN-NEXT: addi a2, a2, 16
; ZVFHMIN-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; ZVFHMIN-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; ZVFHMIN-NEXT: vsetvli zero, a1, e16, m8, ta, ma
; ZVFHMIN-NEXT: vmv8r.v v24, v8
; ZVFHMIN-NEXT: vl8re16.v v8, (a0)
; ZVFHMIN-NEXT: lui a2, 8
; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: vsetvli zero, a1, e16, m8, ta, ma
; ZVFHMIN-NEXT: vxor.vx v8, v8, a2, v0.t
; ZVFHMIN-NEXT: slli a2, a0, 1
; ZVFHMIN-NEXT: mv a3, a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
index 469b3ed99542e6..ff4f3e24ec17b0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vl-opt.ll
@@ -111,9 +111,8 @@ define <vscale x 4 x i32> @different_vl_with_ta(<vscale x 4 x i32> %a, <vscale x
define <vscale x 4 x i32> @different_vl_with_tu(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: different_vl_with_tu:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
+; CHECK-NEXT: vmv2r.v v14, v10
; CHECK-NEXT: vadd.vv v14, v10, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v8, v14, v10
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
index 57b12450ceb56c..d8bff08ea5513f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32-dead.ll
@@ -51,10 +51,9 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
index 129f800571bbcd..03aee1881503a1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv32.ll
@@ -25,10 +25,9 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -65,10 +64,9 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -105,10 +103,9 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -145,10 +142,9 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -185,10 +181,9 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -225,10 +220,9 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -265,11 +259,10 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -306,11 +299,10 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -347,11 +339,10 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -388,11 +379,10 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -429,11 +419,10 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -470,12 +459,11 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -512,12 +500,11 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -554,12 +541,11 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -596,12 +582,11 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -638,12 +623,11 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -680,13 +664,12 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -723,13 +706,12 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -766,13 +748,12 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -809,13 +790,12 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -852,14 +832,13 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -896,14 +875,13 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -940,14 +918,13 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -984,14 +961,13 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1028,7 +1004,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1036,7 +1012,6 @@ define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1073,7 +1048,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1081,7 +1056,6 @@ define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1118,7 +1092,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1126,7 +1100,6 @@ define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1163,7 +1136,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1171,7 +1144,6 @@ define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1208,7 +1180,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1217,7 +1189,6 @@ define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1254,7 +1225,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1263,7 +1234,6 @@ define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1300,7 +1270,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1309,7 +1279,6 @@ define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1346,7 +1315,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1355,7 +1324,6 @@ define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1391,10 +1359,9 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1430,10 +1397,9 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1469,10 +1435,9 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1508,10 +1473,9 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1547,10 +1511,9 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1586,11 +1549,10 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1626,11 +1588,10 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1666,11 +1627,10 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1706,11 +1666,10 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1746,12 +1705,11 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1787,12 +1745,11 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1828,12 +1785,11 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1869,12 +1825,11 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1910,13 +1865,12 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1952,13 +1906,12 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -1994,13 +1947,12 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2036,14 +1988,13 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2079,14 +2030,13 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2122,14 +2072,13 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2165,7 +2114,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2173,7 +2122,6 @@ define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2209,7 +2157,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2217,7 +2165,6 @@ define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2253,7 +2200,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2261,7 +2208,6 @@ define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2297,7 +2243,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2306,7 +2252,6 @@ define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2342,7 +2287,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2351,7 +2296,6 @@ define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2387,7 +2331,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2396,7 +2340,6 @@ define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2432,10 +2375,9 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2471,10 +2413,9 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2510,10 +2451,9 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2549,10 +2489,9 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2588,11 +2527,10 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2628,11 +2566,10 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2668,11 +2605,10 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2708,12 +2644,11 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2749,12 +2684,11 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2790,12 +2724,11 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2831,13 +2764,12 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2873,13 +2805,12 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2915,14 +2846,13 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -2958,14 +2888,13 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3001,7 +2930,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3009,7 +2938,6 @@ define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3045,7 +2973,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3053,7 +2981,6 @@ define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3089,7 +3016,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3098,7 +3025,6 @@ define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3134,7 +3060,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3143,7 +3069,6 @@ define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3179,10 +3104,9 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3218,10 +3142,9 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3257,10 +3180,9 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3296,11 +3218,10 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3336,11 +3257,10 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3376,12 +3296,11 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3417,12 +3336,11 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3458,13 +3376,12 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg5e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3500,14 +3417,13 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg6e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3543,7 +3459,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3551,7 +3467,6 @@ define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg7e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3587,7 +3502,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3596,7 +3511,6 @@ define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg8e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3631,10 +3545,9 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3669,10 +3582,9 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3707,10 +3619,9 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3745,10 +3656,9 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3783,10 +3693,9 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3821,11 +3730,10 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3860,11 +3768,10 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3899,11 +3806,10 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3938,11 +3844,10 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -3977,12 +3882,11 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4017,12 +3921,11 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4057,12 +3960,11 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4097,12 +3999,11 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4137,13 +4038,12 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4178,13 +4078,12 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4219,13 +4118,12 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4260,14 +4158,13 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4302,14 +4199,13 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4344,14 +4240,13 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4386,7 +4281,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4394,7 +4289,6 @@ define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4429,7 +4323,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4437,7 +4331,6 @@ define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4472,7 +4365,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4480,7 +4373,6 @@ define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4515,7 +4407,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4524,7 +4416,6 @@ define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4559,7 +4450,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4568,7 +4459,6 @@ define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4603,7 +4493,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4612,7 +4502,6 @@ define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4647,10 +4536,9 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4685,10 +4573,9 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4723,10 +4610,9 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4761,10 +4647,9 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4799,11 +4684,10 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4838,11 +4722,10 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4877,11 +4760,10 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4916,12 +4798,11 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4956,12 +4837,11 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -4996,12 +4876,11 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5036,13 +4915,12 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5077,13 +4955,12 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5118,14 +4995,13 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5160,14 +5036,13 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5202,7 +5077,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5210,7 +5085,6 @@ define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5245,7 +5119,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5253,7 +5127,6 @@ define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5288,7 +5161,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5297,7 +5170,6 @@ define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5332,7 +5204,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5341,7 +5213,6 @@ define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5376,10 +5247,9 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5414,10 +5284,9 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5452,10 +5321,9 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5490,11 +5358,10 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5529,11 +5396,10 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5568,12 +5434,11 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5608,12 +5473,11 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5648,13 +5512,12 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg5e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5689,14 +5552,13 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg6e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5731,7 +5593,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5739,7 +5601,6 @@ define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg7e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5774,7 +5635,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5783,7 +5644,6 @@ define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg8e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5818,10 +5678,9 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5856,10 +5715,9 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5894,10 +5752,9 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5932,10 +5789,9 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -5970,10 +5826,9 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i32 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6008,11 +5863,10 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6047,11 +5901,10 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6086,11 +5939,10 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6125,11 +5977,10 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6164,12 +6015,11 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6204,12 +6054,11 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6244,12 +6093,11 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6284,12 +6132,11 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i32 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6324,13 +6171,12 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6365,13 +6211,12 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6406,13 +6251,12 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6447,14 +6291,13 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6489,14 +6332,13 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6531,14 +6373,13 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6573,7 +6414,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6581,7 +6422,6 @@ define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6616,7 +6456,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6624,7 +6464,6 @@ define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6659,7 +6498,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6667,7 +6506,6 @@ define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6702,7 +6540,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6711,7 +6549,6 @@ define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6746,7 +6583,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6755,7 +6592,6 @@ define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
@@ -6790,7 +6626,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i32 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6799,7 +6635,6 @@ define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sw a0, 0(a2)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
index 5e0f9515806c24..05a5be295cc716 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64-dead.ll
@@ -51,10 +51,9 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_dead_vl(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask) {
; CHECK-LABEL: test_vlseg2ff_mask_dead_vl:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
index 5be6e9e86efec7..85008ff97143f7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vlsegff-rv64.ll
@@ -25,10 +25,9 @@ entry:
define <vscale x 1 x i8> @test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -65,10 +64,9 @@ entry:
define <vscale x 2 x i8> @test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -105,10 +103,9 @@ entry:
define <vscale x 4 x i8> @test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -145,10 +142,9 @@ entry:
define <vscale x 8 x i8> @test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -185,10 +181,9 @@ entry:
define <vscale x 16 x i8> @test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -225,10 +220,9 @@ entry:
define <vscale x 32 x i8> @test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 32 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv32i8_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vlseg2e8ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -265,11 +259,10 @@ entry:
define <vscale x 1 x i8> @test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t(target("riscv.vector.tuple", <vscale x 1 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -306,11 +299,10 @@ entry:
define <vscale x 2 x i8> @test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -347,11 +339,10 @@ entry:
define <vscale x 4 x i8> @test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -388,11 +379,10 @@ entry:
define <vscale x 8 x i8> @test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -429,11 +419,10 @@ entry:
define <vscale x 16 x i8> @test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg3e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -470,12 +459,11 @@ entry:
define <vscale x 1 x i8> @test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t(target("riscv.vector.tuple", <vscale x 1 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -512,12 +500,11 @@ entry:
define <vscale x 2 x i8> @test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -554,12 +541,11 @@ entry:
define <vscale x 4 x i8> @test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -596,12 +582,11 @@ entry:
define <vscale x 8 x i8> @test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -638,12 +623,11 @@ entry:
define <vscale x 16 x i8> @test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv16i8_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vlseg4e8ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -680,13 +664,12 @@ entry:
define <vscale x 1 x i8> @test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t(target("riscv.vector.tuple", <vscale x 1 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -723,13 +706,12 @@ entry:
define <vscale x 2 x i8> @test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -766,13 +748,12 @@ entry:
define <vscale x 4 x i8> @test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -809,13 +790,12 @@ entry:
define <vscale x 8 x i8> @test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg5e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -852,14 +832,13 @@ entry:
define <vscale x 1 x i8> @test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t(target("riscv.vector.tuple", <vscale x 1 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -896,14 +875,13 @@ entry:
define <vscale x 2 x i8> @test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -940,14 +918,13 @@ entry:
define <vscale x 4 x i8> @test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -984,14 +961,13 @@ entry:
define <vscale x 8 x i8> @test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg6e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1028,7 +1004,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t(target("riscv.vector.tuple", <vscale x 1 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1036,7 +1012,6 @@ define <vscale x 1 x i8> @test_vlseg7ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1073,7 +1048,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1081,7 +1056,6 @@ define <vscale x 2 x i8> @test_vlseg7ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1118,7 +1092,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1126,7 +1100,6 @@ define <vscale x 4 x i8> @test_vlseg7ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1163,7 +1136,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1171,7 +1144,6 @@ define <vscale x 8 x i8> @test_vlseg7ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_7
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg7e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1208,7 +1180,7 @@ entry:
define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t(target("riscv.vector.tuple", <vscale x 1 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1217,7 +1189,6 @@ define <vscale x 1 x i8> @test_vlseg8ff_mask_nxv1i8_triscv.vector.tuple_nxv1i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1254,7 +1225,7 @@ entry:
define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1263,7 +1234,6 @@ define <vscale x 2 x i8> @test_vlseg8ff_mask_nxv2i8_triscv.vector.tuple_nxv2i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1300,7 +1270,7 @@ entry:
define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1309,7 +1279,6 @@ define <vscale x 4 x i8> @test_vlseg8ff_mask_nxv4i8_triscv.vector.tuple_nxv4i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1346,7 +1315,7 @@ entry:
define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -1355,7 +1324,6 @@ define <vscale x 8 x i8> @test_vlseg8ff_mask_nxv8i8_triscv.vector.tuple_nxv8i8_8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vlseg8e8ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1391,10 +1359,9 @@ entry:
define <vscale x 1 x i16> @test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1430,10 +1397,9 @@ entry:
define <vscale x 2 x i16> @test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1469,10 +1435,9 @@ entry:
define <vscale x 4 x i16> @test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1508,10 +1473,9 @@ entry:
define <vscale x 8 x i16> @test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1547,10 +1511,9 @@ entry:
define <vscale x 16 x i16> @test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16i16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1586,11 +1549,10 @@ entry:
define <vscale x 1 x i16> @test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1626,11 +1588,10 @@ entry:
define <vscale x 2 x i16> @test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1666,11 +1627,10 @@ entry:
define <vscale x 4 x i16> @test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1706,11 +1666,10 @@ entry:
define <vscale x 8 x i16> @test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1746,12 +1705,11 @@ entry:
define <vscale x 1 x i16> @test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1787,12 +1745,11 @@ entry:
define <vscale x 2 x i16> @test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1828,12 +1785,11 @@ entry:
define <vscale x 4 x i16> @test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1869,12 +1825,11 @@ entry:
define <vscale x 8 x i16> @test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8i16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1910,13 +1865,12 @@ entry:
define <vscale x 1 x i16> @test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1952,13 +1906,12 @@ entry:
define <vscale x 2 x i16> @test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -1994,13 +1947,12 @@ entry:
define <vscale x 4 x i16> @test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2036,14 +1988,13 @@ entry:
define <vscale x 1 x i16> @test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2079,14 +2030,13 @@ entry:
define <vscale x 2 x i16> @test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2122,14 +2072,13 @@ entry:
define <vscale x 4 x i16> @test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2165,7 +2114,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2173,7 +2122,6 @@ define <vscale x 1 x i16> @test_vlseg7ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2209,7 +2157,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2217,7 +2165,6 @@ define <vscale x 2 x i16> @test_vlseg7ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2253,7 +2200,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2261,7 +2208,6 @@ define <vscale x 4 x i16> @test_vlseg7ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2297,7 +2243,7 @@ entry:
define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2306,7 +2252,6 @@ define <vscale x 1 x i16> @test_vlseg8ff_mask_nxv1i16_triscv.vector.tuple_nxv2i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2342,7 +2287,7 @@ entry:
define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2351,7 +2296,6 @@ define <vscale x 2 x i16> @test_vlseg8ff_mask_nxv2i16_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2387,7 +2331,7 @@ entry:
define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -2396,7 +2340,6 @@ define <vscale x 4 x i16> @test_vlseg8ff_mask_nxv4i16_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2432,10 +2375,9 @@ entry:
define <vscale x 1 x i32> @test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2471,10 +2413,9 @@ entry:
define <vscale x 2 x i32> @test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2510,10 +2451,9 @@ entry:
define <vscale x 4 x i32> @test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2549,10 +2489,9 @@ entry:
define <vscale x 8 x i32> @test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8i32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2588,11 +2527,10 @@ entry:
define <vscale x 1 x i32> @test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2628,11 +2566,10 @@ entry:
define <vscale x 2 x i32> @test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2668,11 +2605,10 @@ entry:
define <vscale x 4 x i32> @test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2708,12 +2644,11 @@ entry:
define <vscale x 1 x i32> @test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2749,12 +2684,11 @@ entry:
define <vscale x 2 x i32> @test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2790,12 +2724,11 @@ entry:
define <vscale x 4 x i32> @test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4i32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2831,13 +2764,12 @@ entry:
define <vscale x 1 x i32> @test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2873,13 +2805,12 @@ entry:
define <vscale x 2 x i32> @test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2915,14 +2846,13 @@ entry:
define <vscale x 1 x i32> @test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -2958,14 +2888,13 @@ entry:
define <vscale x 2 x i32> @test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3001,7 +2930,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3009,7 +2938,6 @@ define <vscale x 1 x i32> @test_vlseg7ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3045,7 +2973,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3053,7 +2981,6 @@ define <vscale x 2 x i32> @test_vlseg7ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3089,7 +3016,7 @@ entry:
define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3098,7 +3025,6 @@ define <vscale x 1 x i32> @test_vlseg8ff_mask_nxv1i32_triscv.vector.tuple_nxv4i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3134,7 +3060,7 @@ entry:
define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3143,7 +3069,6 @@ define <vscale x 2 x i32> @test_vlseg8ff_mask_nxv2i32_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3179,10 +3104,9 @@ entry:
define <vscale x 1 x i64> @test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3218,10 +3142,9 @@ entry:
define <vscale x 2 x i64> @test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3257,10 +3180,9 @@ entry:
define <vscale x 4 x i64> @test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4i64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3296,11 +3218,10 @@ entry:
define <vscale x 1 x i64> @test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3336,11 +3257,10 @@ entry:
define <vscale x 2 x i64> @test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3376,12 +3296,11 @@ entry:
define <vscale x 1 x i64> @test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3417,12 +3336,11 @@ entry:
define <vscale x 2 x i64> @test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2i64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3458,13 +3376,12 @@ entry:
define <vscale x 1 x i64> @test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg5e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3500,14 +3417,13 @@ entry:
define <vscale x 1 x i64> @test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg6e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3543,7 +3459,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3551,7 +3467,6 @@ define <vscale x 1 x i64> @test_vlseg7ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg7e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3587,7 +3502,7 @@ entry:
define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -3596,7 +3511,6 @@ define <vscale x 1 x i64> @test_vlseg8ff_mask_nxv1i64_triscv.vector.tuple_nxv8i8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg8e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3631,10 +3545,9 @@ entry:
define <vscale x 1 x half> @test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3669,10 +3582,9 @@ entry:
define <vscale x 2 x half> @test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3707,10 +3619,9 @@ entry:
define <vscale x 4 x half> @test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3745,10 +3656,9 @@ entry:
define <vscale x 8 x half> @test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3783,10 +3693,9 @@ entry:
define <vscale x 16 x half> @test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16f16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3821,11 +3730,10 @@ entry:
define <vscale x 1 x half> @test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3860,11 +3768,10 @@ entry:
define <vscale x 2 x half> @test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3899,11 +3806,10 @@ entry:
define <vscale x 4 x half> @test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3938,11 +3844,10 @@ entry:
define <vscale x 8 x half> @test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -3977,12 +3882,11 @@ entry:
define <vscale x 1 x half> @test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4017,12 +3921,11 @@ entry:
define <vscale x 2 x half> @test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4057,12 +3960,11 @@ entry:
define <vscale x 4 x half> @test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4097,12 +3999,11 @@ entry:
define <vscale x 8 x half> @test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8f16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4137,13 +4038,12 @@ entry:
define <vscale x 1 x half> @test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4178,13 +4078,12 @@ entry:
define <vscale x 2 x half> @test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4219,13 +4118,12 @@ entry:
define <vscale x 4 x half> @test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4260,14 +4158,13 @@ entry:
define <vscale x 1 x half> @test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4302,14 +4199,13 @@ entry:
define <vscale x 2 x half> @test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4344,14 +4240,13 @@ entry:
define <vscale x 4 x half> @test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4386,7 +4281,7 @@ entry:
define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4394,7 +4289,6 @@ define <vscale x 1 x half> @test_vlseg7ff_mask_nxv1f16_triscv.vector.tuple_nxv2i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4429,7 +4323,7 @@ entry:
define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4437,7 +4331,6 @@ define <vscale x 2 x half> @test_vlseg7ff_mask_nxv2f16_triscv.vector.tuple_nxv4i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4472,7 +4365,7 @@ entry:
define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4480,7 +4373,6 @@ define <vscale x 4 x half> @test_vlseg7ff_mask_nxv4f16_triscv.vector.tuple_nxv8i
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4515,7 +4407,7 @@ entry:
define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4524,7 +4416,6 @@ define <vscale x 1 x half> @test_vlseg8ff_mask_nxv1f16_triscv.vector.tuple_nxv2i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4559,7 +4450,7 @@ entry:
define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4568,7 +4459,6 @@ define <vscale x 2 x half> @test_vlseg8ff_mask_nxv2f16_triscv.vector.tuple_nxv4i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4603,7 +4493,7 @@ entry:
define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -4612,7 +4502,6 @@ define <vscale x 4 x half> @test_vlseg8ff_mask_nxv4f16_triscv.vector.tuple_nxv8i
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4647,10 +4536,9 @@ entry:
define <vscale x 1 x float> @test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4685,10 +4573,9 @@ entry:
define <vscale x 2 x float> @test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4723,10 +4610,9 @@ entry:
define <vscale x 4 x float> @test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4761,10 +4647,9 @@ entry:
define <vscale x 8 x float> @test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8f32_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vlseg2e32ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4799,11 +4684,10 @@ entry:
define <vscale x 1 x float> @test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4838,11 +4722,10 @@ entry:
define <vscale x 2 x float> @test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4877,11 +4760,10 @@ entry:
define <vscale x 4 x float> @test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg3e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4916,12 +4798,11 @@ entry:
define <vscale x 1 x float> @test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4956,12 +4837,11 @@ entry:
define <vscale x 2 x float> @test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -4996,12 +4876,11 @@ entry:
define <vscale x 4 x float> @test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4f32_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vlseg4e32ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5036,13 +4915,12 @@ entry:
define <vscale x 1 x float> @test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5077,13 +4955,12 @@ entry:
define <vscale x 2 x float> @test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg5e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5118,14 +4995,13 @@ entry:
define <vscale x 1 x float> @test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5160,14 +5036,13 @@ entry:
define <vscale x 2 x float> @test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg6e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5202,7 +5077,7 @@ entry:
define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5210,7 +5085,6 @@ define <vscale x 1 x float> @test_vlseg7ff_mask_nxv1f32_triscv.vector.tuple_nxv4
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5245,7 +5119,7 @@ entry:
define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5253,7 +5127,6 @@ define <vscale x 2 x float> @test_vlseg7ff_mask_nxv2f32_triscv.vector.tuple_nxv8
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg7e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5288,7 +5161,7 @@ entry:
define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5297,7 +5170,6 @@ define <vscale x 1 x float> @test_vlseg8ff_mask_nxv1f32_triscv.vector.tuple_nxv4
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5332,7 +5204,7 @@ entry:
define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5341,7 +5213,6 @@ define <vscale x 2 x float> @test_vlseg8ff_mask_nxv2f32_triscv.vector.tuple_nxv8
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vlseg8e32ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5376,10 +5247,9 @@ entry:
define <vscale x 1 x double> @test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5414,10 +5284,9 @@ entry:
define <vscale x 2 x double> @test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5452,10 +5321,9 @@ entry:
define <vscale x 4 x double> @test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4f64_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; CHECK-NEXT: vlseg2e64ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5490,11 +5358,10 @@ entry:
define <vscale x 1 x double> @test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5529,11 +5396,10 @@ entry:
define <vscale x 2 x double> @test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg3e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5568,12 +5434,11 @@ entry:
define <vscale x 1 x double> @test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5608,12 +5473,11 @@ entry:
define <vscale x 2 x double> @test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2f64_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; CHECK-NEXT: vlseg4e64ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5648,13 +5512,12 @@ entry:
define <vscale x 1 x double> @test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg5e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5689,14 +5552,13 @@ entry:
define <vscale x 1 x double> @test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg6e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5731,7 +5593,7 @@ entry:
define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5739,7 +5601,6 @@ define <vscale x 1 x double> @test_vlseg7ff_mask_nxv1f64_triscv.vector.tuple_nxv
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg7e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5774,7 +5635,7 @@ entry:
define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -5783,7 +5644,6 @@ define <vscale x 1 x double> @test_vlseg8ff_mask_nxv1f64_triscv.vector.tuple_nxv
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; CHECK-NEXT: vlseg8e64ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5818,10 +5678,9 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t(target("riscv.vector.tuple", <vscale x 2 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5856,10 +5715,9 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5894,10 +5752,9 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t(target("riscv.vector.tuple", <vscale x 8 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5932,10 +5789,9 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t(target("riscv.vector.tuple", <vscale x 16 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -5970,10 +5826,9 @@ entry:
define <vscale x 16 x bfloat> @test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t(target("riscv.vector.tuple", <vscale x 32 x i8>, 2) %val, ptr %base, i64 %vl, <vscale x 16 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg2ff_mask_nxv16bf16_triscv.vector.tuple_nxv32i8_2t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv4r.v v8, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vlseg2e16ff.v v4, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6008,11 +5863,10 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t(target("riscv.vector.tuple", <vscale x 2 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6047,11 +5901,10 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6086,11 +5939,10 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t(target("riscv.vector.tuple", <vscale x 8 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6125,11 +5977,10 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t(target("riscv.vector.tuple", <vscale x 16 x i8>, 3) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg3ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_3t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg3e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6164,12 +6015,11 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t(target("riscv.vector.tuple", <vscale x 2 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6204,12 +6054,11 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6244,12 +6093,11 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t(target("riscv.vector.tuple", <vscale x 8 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6284,12 +6132,11 @@ entry:
define <vscale x 8 x bfloat> @test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t(target("riscv.vector.tuple", <vscale x 16 x i8>, 4) %val, ptr %base, i64 %vl, <vscale x 8 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg4ff_mask_nxv8bf16_triscv.vector.tuple_nxv16i8_4t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv2r.v v6, v8
; CHECK-NEXT: vmv2r.v v8, v10
; CHECK-NEXT: vmv2r.v v10, v12
; CHECK-NEXT: vmv2r.v v12, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vlseg4e16ff.v v6, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6324,13 +6171,12 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t(target("riscv.vector.tuple", <vscale x 2 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6365,13 +6211,12 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6406,13 +6251,12 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t(target("riscv.vector.tuple", <vscale x 8 x i8>, 5) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg5ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_5t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg5e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6447,14 +6291,13 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t(target("riscv.vector.tuple", <vscale x 2 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6489,14 +6332,13 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6531,14 +6373,13 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t(target("riscv.vector.tuple", <vscale x 8 x i8>, 6) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg6ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_6t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
; CHECK-NEXT: vmv1r.v v10, v11
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg6e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6573,7 +6414,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t(target("riscv.vector.tuple", <vscale x 2 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6581,7 +6422,6 @@ define <vscale x 1 x bfloat> @test_vlseg7ff_mask_nxv1bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6616,7 +6456,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6624,7 +6464,6 @@ define <vscale x 2 x bfloat> @test_vlseg7ff_mask_nxv2bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6659,7 +6498,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t(target("riscv.vector.tuple", <vscale x 8 x i8>, 7) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_7t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6667,7 +6506,6 @@ define <vscale x 4 x bfloat> @test_vlseg7ff_mask_nxv4bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v11, v12
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg7e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6702,7 +6540,7 @@ entry:
define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t(target("riscv.vector.tuple", <vscale x 2 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 1 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nxv2i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6711,7 +6549,6 @@ define <vscale x 1 x bfloat> @test_vlseg8ff_mask_nxv1bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6746,7 +6583,7 @@ entry:
define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 2 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nxv4i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6755,7 +6592,6 @@ define <vscale x 2 x bfloat> @test_vlseg8ff_mask_nxv2bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
@@ -6790,7 +6626,7 @@ entry:
define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t(target("riscv.vector.tuple", <vscale x 8 x i8>, 8) %val, ptr %base, i64 %vl, <vscale x 4 x i1> %mask, ptr %outvl) {
; CHECK-LABEL: test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nxv8i8_8t:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v7, v8
; CHECK-NEXT: vmv1r.v v8, v9
; CHECK-NEXT: vmv1r.v v9, v10
@@ -6799,7 +6635,6 @@ define <vscale x 4 x bfloat> @test_vlseg8ff_mask_nxv4bf16_triscv.vector.tuple_nx
; CHECK-NEXT: vmv1r.v v12, v13
; CHECK-NEXT: vmv1r.v v13, v14
; CHECK-NEXT: vmv1r.v v14, v15
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vlseg8e16ff.v v7, (a0), v0.t
; CHECK-NEXT: csrr a0, vl
; CHECK-NEXT: sd a0, 0(a2)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
index b7c990e0251396..babf8de57b7ea2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v10
; CHECK-NEXT: vmfeq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v12
; CHECK-NEXT: vmfeq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v10
; CHECK-NEXT: vmfeq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v12
; CHECK-NEXT: vmfeq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v9
; CHECK-NEXT: vmfeq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v10
; CHECK-NEXT: vmfeq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfeq.vv v0, v8, v12
; CHECK-NEXT: vmfeq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfeq.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfeq.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfeq_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfeq.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfeq.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfeq.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfeq_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfeq.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfeq.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfeq.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfeq.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfeq.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfeq.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfeq_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfeq.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
index 04fb0977db9cf8..4a9dd2f7d769d2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v10, v8
; CHECK-NEXT: vmfle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v12, v8
; CHECK-NEXT: vmfle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v10, v8
; CHECK-NEXT: vmfle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v12, v8
; CHECK-NEXT: vmfle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v9, v8
; CHECK-NEXT: vmfle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v10, v8
; CHECK-NEXT: vmfle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v12, v8
; CHECK-NEXT: vmfle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfge.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfge.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfge_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfge.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfge.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfge.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfge_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfge.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfge.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfge_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfge.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfge.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfge_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfge.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfge.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfge_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfge_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfge.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
index 2b7d84c8cadb42..c9c5e84937cecb 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v10, v8
; CHECK-NEXT: vmflt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v12, v8
; CHECK-NEXT: vmflt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v10, v8
; CHECK-NEXT: vmflt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v12, v8
; CHECK-NEXT: vmflt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v9, v8
; CHECK-NEXT: vmflt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v10, v8
; CHECK-NEXT: vmflt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v12, v8
; CHECK-NEXT: vmflt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfgt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfgt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfgt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfgt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfgt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfgt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfgt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfgt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfgt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfgt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfgt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfgt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfgt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfgt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfgt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
index 9594a2e368ad95..77d8dda258961d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v8, v10
; CHECK-NEXT: vmfle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v8, v12
; CHECK-NEXT: vmfle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v8, v10
; CHECK-NEXT: vmfle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v8, v12
; CHECK-NEXT: vmfle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfle.vv v0, v8, v9
; CHECK-NEXT: vmfle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfle.vv v0, v8, v10
; CHECK-NEXT: vmfle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfle.vv v0, v8, v12
; CHECK-NEXT: vmfle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfle.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfle.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfle_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfle.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfle.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfle.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfle_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfle.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfle.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfle_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfle.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfle.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfle_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfle.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfle.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfle_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfle_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfle.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
index 8ebc35d4f01e5f..0fdae8abe8f6b3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v8, v10
; CHECK-NEXT: vmflt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v8, v12
; CHECK-NEXT: vmflt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v8, v10
; CHECK-NEXT: vmflt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v8, v12
; CHECK-NEXT: vmflt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmflt.vv v0, v8, v9
; CHECK-NEXT: vmflt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmflt.vv v0, v8, v10
; CHECK-NEXT: vmflt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmflt.vv v0, v8, v12
; CHECK-NEXT: vmflt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmflt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmflt.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmflt_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmflt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmflt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmflt.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmflt_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmflt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmflt.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmflt_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmflt.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmflt.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmflt_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmflt.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmflt.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmflt_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmflt_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmflt.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
index d5f7387ec1ddce..1d0227f7937282 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, <vscale x 1 x half> %2, <vscale x 1 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, <vscale x 2 x half> %2, <vscale x 2 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, <vscale x 4 x half> %2, <vscale x 4 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, <vscale x 8 x half> %2, <vscale x 8 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f16_nxv8f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfne.vv v0, v8, v10
; CHECK-NEXT: vmfne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, <vscale x 16 x half> %2, <vscale x 16 x half> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv16f16_nxv16f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfne.vv v0, v8, v12
; CHECK-NEXT: vmfne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -294,9 +289,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, <vscale x 1 x float> %2, <vscale x 1 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -346,9 +340,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, <vscale x 2 x float> %2, <vscale x 2 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, <vscale x 4 x float> %2, <vscale x 4 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f32_nxv4f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfne.vv v0, v8, v10
; CHECK-NEXT: vmfne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -450,9 +442,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, <vscale x 8 x float> %2, <vscale x 8 x float> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv8f32_nxv8f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfne.vv v0, v8, v12
; CHECK-NEXT: vmfne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -502,9 +493,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, <vscale x 1 x double> %2, <vscale x 1 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmfne.vv v0, v8, v9
; CHECK-NEXT: vmfne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -554,9 +544,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, <vscale x 2 x double> %2, <vscale x 2 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f64_nxv2f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmfne.vv v0, v8, v10
; CHECK-NEXT: vmfne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -606,9 +595,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, <vscale x 4 x double> %2, <vscale x 4 x double> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f64_nxv4f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmfne.vv v0, v8, v12
; CHECK-NEXT: vmfne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -658,10 +646,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f16.f16(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f16_f16(<vscale x 1 x i1> %0, <vscale x 1 x half> %1, half %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -706,10 +693,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f16.f16(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f16_f16(<vscale x 2 x i1> %0, <vscale x 2 x half> %1, half %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -754,10 +740,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f16.f16(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f16_f16(<vscale x 4 x i1> %0, <vscale x 4 x half> %1, half %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -802,10 +787,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f16.f16(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f16_f16(<vscale x 8 x i1> %0, <vscale x 8 x half> %1, half %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmfne.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -850,10 +834,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmfne.mask.nxv16f16.f16(
define <vscale x 16 x i1> @intrinsic_vmfne_mask_vf_nxv16f16_f16(<vscale x 16 x i1> %0, <vscale x 16 x half> %1, half %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv16f16_f16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmfne.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -898,10 +881,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f32.f32(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f32_f32(<vscale x 1 x i1> %0, <vscale x 1 x float> %1, float %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -946,10 +928,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f32.f32(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f32_f32(<vscale x 2 x i1> %0, <vscale x 2 x float> %1, float %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -994,10 +975,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f32.f32(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f32_f32(<vscale x 4 x i1> %0, <vscale x 4 x float> %1, float %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmfne.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1042,10 +1022,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmfne.mask.nxv8f32.f32(
define <vscale x 8 x i1> @intrinsic_vmfne_mask_vf_nxv8f32_f32(<vscale x 8 x i1> %0, <vscale x 8 x float> %1, float %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv8f32_f32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmfne.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1090,10 +1069,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmfne.mask.nxv1f64.f64(
define <vscale x 1 x i1> @intrinsic_vmfne_mask_vf_nxv1f64_f64(<vscale x 1 x i1> %0, <vscale x 1 x double> %1, double %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv1f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmfne.vf v10, v8, fa0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1138,10 +1116,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmfne.mask.nxv2f64.f64(
define <vscale x 2 x i1> @intrinsic_vmfne_mask_vf_nxv2f64_f64(<vscale x 2 x i1> %0, <vscale x 2 x double> %1, double %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv2f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmfne.vf v11, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1186,10 +1163,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmfne.mask.nxv4f64.f64(
define <vscale x 4 x i1> @intrinsic_vmfne_mask_vf_nxv4f64_f64(<vscale x 4 x i1> %0, <vscale x 4 x double> %1, double %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmfne_mask_vf_nxv4f64_f64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmfne.vf v13, v8, fa0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
index 669ed0ba89ab50..80e74faa8cd910 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsbf.ll
@@ -31,10 +31,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsbf.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -74,10 +73,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsbf.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -117,10 +115,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsbf.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -160,10 +157,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsbf.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -203,10 +199,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsbf.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -246,10 +241,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsbf.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -289,10 +283,9 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsbf.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsbf_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmsbf.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
index f955fa85149719..6407f39a65e8b3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmseq.vv v0, v8, v10
; CHECK-NEXT: vmseq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmseq.vv v0, v8, v12
; CHECK-NEXT: vmseq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmseq.vv v0, v8, v10
; CHECK-NEXT: vmseq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmseq.vv v0, v8, v12
; CHECK-NEXT: vmseq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmseq.vv v0, v8, v10
; CHECK-NEXT: vmseq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmseq.vv v0, v8, v12
; CHECK-NEXT: vmseq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmseq.vv v0, v8, v9
; CHECK-NEXT: vmseq.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmseq.vv v0, v8, v10
; CHECK-NEXT: vmseq.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmseq.vv v0, v8, v12
; CHECK-NEXT: vmseq.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmseq.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmseq.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmseq.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmseq.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmseq.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmseq.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmseq.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmseq.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmseq.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmseq.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmseq.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmseq.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmseq.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmseq.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmseq.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmseq_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmseq_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmseq.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmseq.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmseq_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmseq.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmseq.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmseq_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmseq.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmseq.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmseq_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmseq.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmseq_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmseq.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmseq_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmseq.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmseq_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmseq_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmseq.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
index 7675fb91dc261b..45e3840f7e6731 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v10, v8
; CHECK-NEXT: vmsle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v12, v8
; CHECK-NEXT: vmsle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v10, v8
; CHECK-NEXT: vmsle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v12, v8
; CHECK-NEXT: vmsle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v10, v8
; CHECK-NEXT: vmsle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v12, v8
; CHECK-NEXT: vmsle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v9, v8
; CHECK-NEXT: vmsle.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v10, v8
; CHECK-NEXT: vmsle.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v12, v8
; CHECK-NEXT: vmsle.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -971,10 +953,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1020,10 +1001,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1069,10 +1049,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1090,12 +1069,11 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: li a1, 99
+; CHECK-NEXT: li a0, 99
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT: vmsgt.vx v10, v8, a1, v0.t
+; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
entry:
@@ -1158,10 +1136,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1207,10 +1184,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1256,10 +1232,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsge.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1305,10 +1280,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1403,10 +1376,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1452,10 +1424,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1501,10 +1472,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsge.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1550,10 +1520,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsge.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1599,10 +1568,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsge.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1648,10 +1616,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsge.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1697,10 +1664,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsge.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1773,10 +1739,9 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmslt.vx v10, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v10, v9
; RV64-NEXT: ret
@@ -1849,10 +1814,9 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmslt.vx v11, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v11, v10
; RV64-NEXT: ret
@@ -1925,10 +1889,9 @@ define <vscale x 4 x i1> @intrinsic_vmsge_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsge_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmslt.vx v13, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v13, v12
; RV64-NEXT: ret
@@ -1961,10 +1924,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -15, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1997,10 +1959,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -13, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2064,10 +2025,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -11, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2100,10 +2060,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2136,10 +2095,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, -7, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2172,10 +2130,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsge_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, -5, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2208,10 +2165,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -3, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2244,10 +2200,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, -1, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2280,10 +2235,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2316,10 +2270,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 2, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2352,10 +2305,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsge_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 4, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2388,10 +2340,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 6, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2424,10 +2375,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2460,10 +2410,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2496,10 +2445,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsge_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 12, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2532,10 +2480,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsge_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2568,10 +2515,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsge_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 8, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2604,10 +2550,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsge_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsge_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
index 763b6b6167c16d..e42be4faafefcf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v10, v8
; CHECK-NEXT: vmsleu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v12, v8
; CHECK-NEXT: vmsleu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v10, v8
; CHECK-NEXT: vmsleu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v12, v8
; CHECK-NEXT: vmsleu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v10, v8
; CHECK-NEXT: vmsleu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v12, v8
; CHECK-NEXT: vmsleu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v9, v8
; CHECK-NEXT: vmsleu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v10, v8
; CHECK-NEXT: vmsleu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v12, v8
; CHECK-NEXT: vmsleu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -971,10 +953,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1020,10 +1001,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1069,10 +1049,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1118,10 +1097,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1167,10 +1145,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1216,10 +1193,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgeu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1265,10 +1241,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1314,10 +1289,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1363,10 +1337,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1412,10 +1385,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1461,10 +1433,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgeu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1510,10 +1481,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgeu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1559,10 +1529,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgeu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v10, v9
; CHECK-NEXT: ret
@@ -1608,10 +1577,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgeu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v11, v10
; CHECK-NEXT: ret
@@ -1657,10 +1625,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgeu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmxor.mm v0, v13, v12
; CHECK-NEXT: ret
@@ -1733,10 +1700,9 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsltu.vx v10, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v10, v9
; RV64-NEXT: ret
@@ -1809,10 +1775,9 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsltu.vx v11, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v11, v10
; RV64-NEXT: ret
@@ -1885,10 +1850,9 @@ define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgeu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsltu.vx v13, v8, a0, v0.t
; RV64-NEXT: vmxor.mm v0, v13, v12
; RV64-NEXT: ret
@@ -1921,10 +1885,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, -15, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1957,10 +1920,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, -13, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1993,10 +1955,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, -11, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2014,12 +1975,11 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i8_i8_1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
-; CHECK-NEXT: li a1, 99
+; CHECK-NEXT: li a0, 99
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT: vmsgtu.vx v10, v8, a1, v0.t
+; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
entry:
@@ -2051,10 +2011,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, -9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2087,10 +2046,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, -7, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2123,10 +2081,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgeu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, -5, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2159,10 +2116,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, -3, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2258,10 +2214,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2294,10 +2249,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 2, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2330,10 +2284,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgeu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 4, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2366,10 +2319,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 6, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2402,10 +2354,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2438,10 +2389,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2474,10 +2424,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 12, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2510,10 +2459,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 14, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2546,10 +2494,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, -16, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2582,10 +2529,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgeu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, -14, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
index ea54351268288a..62ac44bfdf38cc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v10, v8
; CHECK-NEXT: vmslt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v12, v8
; CHECK-NEXT: vmslt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v10, v8
; CHECK-NEXT: vmslt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v12, v8
; CHECK-NEXT: vmslt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v10, v8
; CHECK-NEXT: vmslt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v12, v8
; CHECK-NEXT: vmslt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v9, v8
; CHECK-NEXT: vmslt.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v10, v8
; CHECK-NEXT: vmslt.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v12, v8
; CHECK-NEXT: vmslt.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsgt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsgt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsgt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsgt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsgt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsgt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsgt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsgt.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsgt.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsgt.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsgt.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsgt.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsgt.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
index d59a200a8185f7..d57b9cd5bae533 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v10, v8
; CHECK-NEXT: vmsltu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v12, v8
; CHECK-NEXT: vmsltu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v10, v8
; CHECK-NEXT: vmsltu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v12, v8
; CHECK-NEXT: vmsltu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v10, v8
; CHECK-NEXT: vmsltu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v12, v8
; CHECK-NEXT: vmsltu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v9, v8
; CHECK-NEXT: vmsltu.vv v11, v10, v9, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v10, v8
; CHECK-NEXT: vmsltu.vv v14, v12, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v12, v8
; CHECK-NEXT: vmsltu.vv v20, v16, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsgtu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsgtu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsgtu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsgtu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsgtu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsgtu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsgtu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsgtu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsgtu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsgtu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsgtu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsgtu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsgtu.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsgtu.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsgtu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsgtu.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsgtu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsgtu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsgtu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsgtu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsgtu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsgtu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
index 7a068d1f3b7f8b..9c70dcab1efde6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsif.ll
@@ -31,10 +31,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsif.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsif_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -74,10 +73,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsif.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsif_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -117,10 +115,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsif.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsif_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -160,10 +157,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsif.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsif_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -203,10 +199,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsif.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsif_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -246,10 +241,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsif.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsif_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -289,10 +283,9 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsif.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsif_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsif_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmsif.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
index 209ec617674bfa..9653dfd2518d80 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v8, v10
; CHECK-NEXT: vmsle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v8, v12
; CHECK-NEXT: vmsle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v8, v10
; CHECK-NEXT: vmsle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v8, v12
; CHECK-NEXT: vmsle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v8, v10
; CHECK-NEXT: vmsle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v8, v12
; CHECK-NEXT: vmsle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsle.vv v0, v8, v9
; CHECK-NEXT: vmsle.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsle.vv v0, v8, v10
; CHECK-NEXT: vmsle.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsle.vv v0, v8, v12
; CHECK-NEXT: vmsle.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsle.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsle.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsle.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsle.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsle.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsle.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsle.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsle.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsle_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsle_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsle.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsle_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsle_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsle_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsle_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsle_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsle_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsle_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
index 0c5eb11e0b1ca2..25ecfa65c7c487 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v10
; CHECK-NEXT: vmsleu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v12
; CHECK-NEXT: vmsleu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v10
; CHECK-NEXT: vmsleu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v12
; CHECK-NEXT: vmsleu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v10
; CHECK-NEXT: vmsleu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v12
; CHECK-NEXT: vmsleu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v9
; CHECK-NEXT: vmsleu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v10
; CHECK-NEXT: vmsleu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsleu.vv v0, v8, v12
; CHECK-NEXT: vmsleu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsleu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsleu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsleu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsleu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsleu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsleu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsleu.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsleu.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsleu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsleu.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsleu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsleu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsleu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
index c71fc2a1b3b424..c17495e3b2119f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v8, v10
; CHECK-NEXT: vmslt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v8, v12
; CHECK-NEXT: vmslt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v8, v10
; CHECK-NEXT: vmslt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v8, v12
; CHECK-NEXT: vmslt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v8, v10
; CHECK-NEXT: vmslt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v8, v12
; CHECK-NEXT: vmslt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmslt.vv v0, v8, v9
; CHECK-NEXT: vmslt.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmslt.vv v0, v8, v10
; CHECK-NEXT: vmslt.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmslt.vv v0, v8, v12
; CHECK-NEXT: vmslt.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmslt.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmslt.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmslt.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmslt.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmslt.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmslt.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmslt.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmslt.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmslt.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmslt.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmslt_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmslt_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmslt.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, -15, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, -13, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, -11, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, -9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, -7, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmslt_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, -5, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, -3, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmslt.vx v10, v8, zero, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 2, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmslt_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 4, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 6, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmslt_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 12, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmslt_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsle.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmslt_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsle.vi v11, v8, 8, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmslt_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmslt_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsle.vi v13, v8, 8, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
index 52bb3c746b7dca..a37a02848365d8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v10
; CHECK-NEXT: vmsltu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v12
; CHECK-NEXT: vmsltu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v10
; CHECK-NEXT: vmsltu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v12
; CHECK-NEXT: vmsltu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v10
; CHECK-NEXT: vmsltu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v12
; CHECK-NEXT: vmsltu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v9
; CHECK-NEXT: vmsltu.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v10
; CHECK-NEXT: vmsltu.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsltu.vv v0, v8, v12
; CHECK-NEXT: vmsltu.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsltu.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsltu.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsltu.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsltu.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsltu.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsltu.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsltu.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsltu.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsltu.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsltu.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsltu_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsltu.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, -15, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, -13, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, -11, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, -9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, -7, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsltu_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, -5, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, -3, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsltu.vx v10, v8, zero, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 2, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsltu_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 4, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 6, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, 10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, 12, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsleu.vi v10, v8, 14, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsleu.vi v11, v8, -16, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsltu_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsleu.vi v13, v8, -14, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
index 15a496baed9888..ed41a18dcc8d3a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
@@ -34,9 +34,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i8> %2, <vscale x 1 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -86,9 +85,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i8> %2, <vscale x 2 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -138,9 +136,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i8> %2, <vscale x 4 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -190,9 +187,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i8> %2, <vscale x 8 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -242,9 +238,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2, <vscale x 16 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i8_nxv16i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsne.vv v0, v8, v10
; CHECK-NEXT: vmsne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -294,9 +289,8 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i8> %2, <vscale x 32 x i8> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv32i8_nxv32i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsne.vv v0, v8, v12
; CHECK-NEXT: vmsne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -346,9 +340,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i16> %2, <vscale x 1 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -398,9 +391,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i16> %2, <vscale x 2 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -450,9 +442,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i16> %2, <vscale x 4 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -502,9 +493,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2, <vscale x 8 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i16_nxv8i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsne.vv v0, v8, v10
; CHECK-NEXT: vmsne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -554,9 +544,8 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i16> %2, <vscale x 16 x i16> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv16i16_nxv16i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsne.vv v0, v8, v12
; CHECK-NEXT: vmsne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -606,9 +595,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i32> %2, <vscale x 1 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
@@ -658,9 +646,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i32> %2, <vscale x 2 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -710,9 +697,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2, <vscale x 4 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i32_nxv4i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsne.vv v0, v8, v10
; CHECK-NEXT: vmsne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -762,9 +748,8 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i32> %2, <vscale x 8 x i32> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i32_nxv8i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsne.vv v0, v8, v12
; CHECK-NEXT: vmsne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -814,9 +799,8 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i64(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i64> %2, <vscale x 1 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
+; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmsne.vv v0, v8, v9
; CHECK-NEXT: vmsne.vv v11, v9, v10, v0.t
; CHECK-NEXT: vmv.v.v v0, v11
@@ -866,9 +850,8 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i64(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2, <vscale x 2 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i64_nxv2i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
+; CHECK-NEXT: vmv1r.v v14, v0
; CHECK-NEXT: vmsne.vv v0, v8, v10
; CHECK-NEXT: vmsne.vv v14, v10, v12, v0.t
; CHECK-NEXT: vmv1r.v v0, v14
@@ -918,9 +901,8 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i64(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i64> %2, <vscale x 4 x i64> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i64_nxv4i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
+; CHECK-NEXT: vmv1r.v v20, v0
; CHECK-NEXT: vmsne.vv v0, v8, v12
; CHECK-NEXT: vmsne.vv v20, v12, v16, v0.t
; CHECK-NEXT: vmv1r.v v0, v20
@@ -970,10 +952,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i8.i8(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, i8 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1018,10 +999,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i8.i8(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, i8 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1066,10 +1046,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i8.i8(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, i8 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1114,10 +1093,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i8.i8(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, i8 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1162,10 +1140,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i8.i8(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, i8 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
; CHECK-NEXT: vmsne.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1210,10 +1187,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsne.mask.nxv32i8.i8(
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vx_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, i8 %2, <vscale x 32 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, mu
; CHECK-NEXT: vmsne.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1258,10 +1234,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i16.i16(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, i16 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1306,10 +1281,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i16.i16(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, i16 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1354,10 +1328,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i16.i16(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, i16 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1402,10 +1375,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i16.i16(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, i16 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, mu
; CHECK-NEXT: vmsne.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1450,10 +1422,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsne.mask.nxv16i16.i16(
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vx_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, i16 %2, <vscale x 16 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e16, m4, ta, mu
; CHECK-NEXT: vmsne.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1498,10 +1469,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsne.mask.nxv1i32.i32(
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, i32 %2, <vscale x 1 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1546,10 +1516,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsne.mask.nxv2i32.i32(
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, i32 %2, <vscale x 2 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, mu
; CHECK-NEXT: vmsne.vx v10, v8, a0, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -1594,10 +1563,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsne.mask.nxv4i32.i32(
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, i32 %2, <vscale x 4 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, mu
; CHECK-NEXT: vmsne.vx v11, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -1642,10 +1610,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsne.mask.nxv8i32.i32(
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vx_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, i32 %2, <vscale x 8 x i1> %3, iXLen %4) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vx_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, mu
; CHECK-NEXT: vmsne.vx v13, v8, a0, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -1717,10 +1684,9 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vx_nxv1i64_i64(<vscale x 1 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv1i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmv1r.v v10, v0
; RV64-NEXT: vmv1r.v v0, v9
-; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, mu
; RV64-NEXT: vmsne.vx v10, v8, a0, v0.t
; RV64-NEXT: vmv.v.v v0, v10
; RV64-NEXT: ret
@@ -1792,10 +1758,9 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vx_nxv2i64_i64(<vscale x 2 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv2i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmv1r.v v11, v0
; RV64-NEXT: vmv1r.v v0, v10
-; RV64-NEXT: vsetvli zero, a1, e64, m2, ta, mu
; RV64-NEXT: vmsne.vx v11, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v11
; RV64-NEXT: ret
@@ -1867,10 +1832,9 @@ define <vscale x 4 x i1> @intrinsic_vmsne_mask_vx_nxv4i64_i64(<vscale x 4 x i1>
;
; RV64-LABEL: intrinsic_vmsne_mask_vx_nxv4i64_i64:
; RV64: # %bb.0: # %entry
-; RV64-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmv1r.v v13, v0
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e64, m4, ta, mu
; RV64-NEXT: vmsne.vx v13, v8, a0, v0.t
; RV64-NEXT: vmv1r.v v0, v13
; RV64-NEXT: ret
@@ -1903,10 +1867,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i8_i8(<vscale x 1 x i1> %0, <vscale x 1 x i8> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1939,10 +1902,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i8_i8(<vscale x 2 x i1> %0, <vscale x 2 x i8> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -1975,10 +1937,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i8_i8(<vscale x 4 x i1> %0, <vscale x 4 x i8> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2011,10 +1972,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i8_i8(<vscale x 8 x i1> %0, <vscale x 8 x i8> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2047,10 +2007,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i8_i8(<vscale x 16 x i1> %0, <vscale x 16 x i8> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsne.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2083,10 +2042,9 @@ entry:
define <vscale x 32 x i1> @intrinsic_vmsne_mask_vi_nxv32i8_i8(<vscale x 32 x i1> %0, <vscale x 32 x i8> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv32i8_i8:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsne.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2119,10 +2077,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i16_i16(<vscale x 1 x i1> %0, <vscale x 1 x i16> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2155,10 +2112,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i16_i16(<vscale x 2 x i1> %0, <vscale x 2 x i16> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2191,10 +2147,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i16_i16(<vscale x 4 x i1> %0, <vscale x 4 x i16> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2227,10 +2182,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i16_i16(<vscale x 8 x i1> %0, <vscale x 8 x i16> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, mu
; CHECK-NEXT: vmsne.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2263,10 +2217,9 @@ entry:
define <vscale x 16 x i1> @intrinsic_vmsne_mask_vi_nxv16i16_i16(<vscale x 16 x i1> %0, <vscale x 16 x i16> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv16i16_i16:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, mu
; CHECK-NEXT: vmsne.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2299,10 +2252,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i32_i32(<vscale x 1 x i1> %0, <vscale x 1 x i32> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -2335,10 +2287,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i32_i32(<vscale x 2 x i1> %0, <vscale x 2 x i32> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m1, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2371,10 +2322,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i32_i32(<vscale x 4 x i1> %0, <vscale x 4 x i32> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, mu
; CHECK-NEXT: vmsne.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2407,10 +2357,9 @@ entry:
define <vscale x 8 x i1> @intrinsic_vmsne_mask_vi_nxv8i32_i32(<vscale x 8 x i1> %0, <vscale x 8 x i32> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv8i32_i32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, mu
; CHECK-NEXT: vmsne.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
@@ -2443,10 +2392,9 @@ entry:
define <vscale x 1 x i1> @intrinsic_vmsne_mask_vi_nxv1i64_i64(<vscale x 1 x i1> %0, <vscale x 1 x i64> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv1i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, mu
; CHECK-NEXT: vmsne.vi v10, v8, 9, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -2479,10 +2427,9 @@ entry:
define <vscale x 2 x i1> @intrinsic_vmsne_mask_vi_nxv2i64_i64(<vscale x 2 x i1> %0, <vscale x 2 x i64> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv2i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmv1r.v v11, v0
; CHECK-NEXT: vmv1r.v v0, v10
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, mu
; CHECK-NEXT: vmsne.vi v11, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v11
; CHECK-NEXT: ret
@@ -2515,10 +2462,9 @@ entry:
define <vscale x 4 x i1> @intrinsic_vmsne_mask_vi_nxv4i64_i64(<vscale x 4 x i1> %0, <vscale x 4 x i64> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsne_mask_vi_nxv4i64_i64:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmv1r.v v13, v0
; CHECK-NEXT: vmv1r.v v0, v12
-; CHECK-NEXT: vsetvli zero, a0, e64, m4, ta, mu
; CHECK-NEXT: vmsne.vi v13, v8, 9, v0.t
; CHECK-NEXT: vmv1r.v v0, v13
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
index c243bd647469e0..4b818a2b1e58f5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsof.ll
@@ -31,10 +31,9 @@ declare <vscale x 1 x i1> @llvm.riscv.vmsof.mask.nxv1i1(
define <vscale x 1 x i1> @intrinsic_vmsof_mask_m_nxv1i1_nxv1i1(<vscale x 1 x i1> %0, <vscale x 1 x i1> %1, <vscale x 1 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv1i1_nxv1i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -74,10 +73,9 @@ declare <vscale x 2 x i1> @llvm.riscv.vmsof.mask.nxv2i1(
define <vscale x 2 x i1> @intrinsic_vmsof_mask_m_nxv2i1_nxv2i1(<vscale x 2 x i1> %0, <vscale x 2 x i1> %1, <vscale x 2 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv2i1_nxv2i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -117,10 +115,9 @@ declare <vscale x 4 x i1> @llvm.riscv.vmsof.mask.nxv4i1(
define <vscale x 4 x i1> @intrinsic_vmsof_mask_m_nxv4i1_nxv4i1(<vscale x 4 x i1> %0, <vscale x 4 x i1> %1, <vscale x 4 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv4i1_nxv4i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -160,10 +157,9 @@ declare <vscale x 8 x i1> @llvm.riscv.vmsof.mask.nxv8i1(
define <vscale x 8 x i1> @intrinsic_vmsof_mask_m_nxv8i1_nxv8i1(<vscale x 8 x i1> %0, <vscale x 8 x i1> %1, <vscale x 8 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv8i1_nxv8i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv.v.v v0, v10
; CHECK-NEXT: ret
@@ -203,10 +199,9 @@ declare <vscale x 16 x i1> @llvm.riscv.vmsof.mask.nxv16i1(
define <vscale x 16 x i1> @intrinsic_vmsof_mask_m_nxv16i1_nxv16i1(<vscale x 16 x i1> %0, <vscale x 16 x i1> %1, <vscale x 16 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv16i1_nxv16i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -246,10 +241,9 @@ declare <vscale x 32 x i1> @llvm.riscv.vmsof.mask.nxv32i1(
define <vscale x 32 x i1> @intrinsic_vmsof_mask_m_nxv32i1_nxv32i1(<vscale x 32 x i1> %0, <vscale x 32 x i1> %1, <vscale x 32 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv32i1_nxv32i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
@@ -289,10 +283,9 @@ declare <vscale x 64 x i1> @llvm.riscv.vmsof.mask.nxv64i1(
define <vscale x 64 x i1> @intrinsic_vmsof_mask_m_nxv64i1_nxv64i1(<vscale x 64 x i1> %0, <vscale x 64 x i1> %1, <vscale x 64 x i1> %2, iXLen %3) nounwind {
; CHECK-LABEL: intrinsic_vmsof_mask_m_nxv64i1_nxv64i1:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v9
-; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, mu
; CHECK-NEXT: vmsof.m v10, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v10
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
index 09d4aadf968f95..6345b90db23b8a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmv.v.v-peephole.ll
@@ -49,9 +49,8 @@ define <vscale x 4 x i32> @vadd_same_passthru(<vscale x 4 x i32> %passthru, <vsc
define <vscale x 4 x i32> @unfoldable_diff_avl_unknown(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: unfoldable_diff_avl_unknown:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT: vmv2r.v v14, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
+; CHECK-NEXT: vmv2r.v v14, v8
; CHECK-NEXT: vadd.vv v14, v10, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; CHECK-NEXT: vmv.v.v v8, v14
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
index 9028b82a70dd51..b316f5f878816b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-cttz-elts.ll
@@ -5,10 +5,9 @@
define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
-; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; RV32-NEXT: vfirst.m a1, v9, v0.t
; RV32-NEXT: bltz a1, .LBB0_2
; RV32-NEXT: # %bb.1:
@@ -37,10 +36,9 @@ define iXLen @bool_vec(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
define iXLen @bool_vec_zero_poison(<vscale x 2 x i1> %src, <vscale x 2 x i1> %m, i32 %evl) {
; RV32-LABEL: bool_vec_zero_poison:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; RV32-NEXT: vmv1r.v v9, v0
; RV32-NEXT: vmv1r.v v0, v8
-; RV32-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
; RV32-NEXT: vfirst.m a0, v9, v0.t
; RV32-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
index c4084c9225e876..745cec4e7c4f68 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-fixed-vectors.ll
@@ -10,10 +10,9 @@ declare <16 x i1> @llvm.experimental.vp.splice.v16i1(<16 x i1>, <16 x i1>, i32,
define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -35,10 +34,9 @@ define <2 x i1> @test_vp_splice_v2i1(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %ev
define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -60,10 +58,9 @@ define <2 x i1> @test_vp_splice_v2i1_negative_offset(<2 x i1> %va, <2 x i1> %vb,
define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v2i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -86,10 +83,9 @@ define <2 x i1> @test_vp_splice_v2i1_masked(<2 x i1> %va, <2 x i1> %vb, <2 x i1>
define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -111,10 +107,9 @@ define <4 x i1> @test_vp_splice_v4i1(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %ev
define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -136,10 +131,9 @@ define <4 x i1> @test_vp_splice_v4i1_negative_offset(<4 x i1> %va, <4 x i1> %vb,
define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v4i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -162,10 +156,9 @@ define <4 x i1> @test_vp_splice_v4i1_masked(<4 x i1> %va, <4 x i1> %vb, <4 x i1>
define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -187,10 +180,9 @@ define <8 x i1> @test_vp_splice_v8i1(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %ev
define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -212,10 +204,9 @@ define <8 x i1> @test_vp_splice_v8i1_negative_offset(<8 x i1> %va, <8 x i1> %vb,
define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v8i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -238,10 +229,9 @@ define <8 x i1> @test_vp_splice_v8i1_masked(<8 x i1> %va, <8 x i1> %vb, <8 x i1>
define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -263,10 +253,9 @@ define <16 x i1> @test_vp_splice_v16i1(<16 x i1> %va, <16 x i1> %vb, i32 zeroext
define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -288,10 +277,9 @@ define <16 x i1> @test_vp_splice_v16i1_negative_offset(<16 x i1> %va, <16 x i1>
define <16 x i1> @test_vp_splice_v16i1_masked(<16 x i1> %va, <16 x i1> %vb, <16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_v16i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
index 1605fe0c4c7f55..3b0b183537468e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-splice-mask-vectors.ll
@@ -13,10 +13,9 @@ declare <vscale x 64 x i1> @llvm.experimental.vp.splice.nxv64i1(<vscale x 64 x i
define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -38,10 +37,9 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1(<vscale x 1 x i1> %va, <vscale x
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -63,10 +61,9 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_negative_offset(<vscale x 1 x i1
define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <vscale x 1 x i1> %vb, <vscale x 1 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv1i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf8, ta, ma
@@ -89,10 +86,9 @@ define <vscale x 1 x i1> @test_vp_splice_nxv1i1_masked(<vscale x 1 x i1> %va, <v
define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -114,10 +110,9 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1(<vscale x 2 x i1> %va, <vscale x
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -139,10 +134,9 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_negative_offset(<vscale x 2 x i1
define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <vscale x 2 x i1> %vb, <vscale x 2 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv2i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf4, ta, ma
@@ -165,10 +159,9 @@ define <vscale x 2 x i1> @test_vp_splice_nxv2i1_masked(<vscale x 2 x i1> %va, <v
define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -190,10 +183,9 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1(<vscale x 4 x i1> %va, <vscale x
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -215,10 +207,9 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_negative_offset(<vscale x 4 x i1
define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <vscale x 4 x i1> %vb, <vscale x 4 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv4i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
@@ -241,10 +232,9 @@ define <vscale x 4 x i1> @test_vp_splice_nxv4i1_masked(<vscale x 4 x i1> %va, <v
define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -266,10 +256,9 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1(<vscale x 8 x i1> %va, <vscale x
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -291,10 +280,9 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_negative_offset(<vscale x 8 x i1
define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <vscale x 8 x i1> %vb, <vscale x 8 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv8i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
@@ -317,10 +305,9 @@ define <vscale x 8 x i1> @test_vp_splice_nxv8i1_masked(<vscale x 8 x i1> %va, <v
define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: vmerge.vim v10, v10, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -342,10 +329,9 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1(<vscale x 16 x i1> %va, <vscal
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: vmerge.vim v10, v10, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -367,10 +353,9 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_negative_offset(<vscale x 16 x
define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va, <vscale x 16 x i1> %vb, <vscale x 16 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv16i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv.v.i v12, 0
; CHECK-NEXT: vmerge.vim v12, v12, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
@@ -394,10 +379,9 @@ define <vscale x 16 x i1> @test_vp_splice_nxv16i1_masked(<vscale x 16 x i1> %va,
define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv.v.i v12, 0
; CHECK-NEXT: vmerge.vim v12, v12, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -419,10 +403,9 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1(<vscale x 32 x i1> %va, <vscal
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv.v.i v12, 0
; CHECK-NEXT: vmerge.vim v12, v12, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -444,10 +427,9 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_negative_offset(<vscale x 32 x
define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va, <vscale x 32 x i1> %vb, <vscale x 32 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv32i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv.v.i v12, 0
; CHECK-NEXT: vmerge.vim v12, v12, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
@@ -471,10 +453,9 @@ define <vscale x 32 x i1> @test_vp_splice_nxv32i1_masked(<vscale x 32 x i1> %va,
define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v16, 0
; CHECK-NEXT: vmerge.vim v16, v16, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
@@ -496,10 +477,9 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1(<vscale x 64 x i1> %va, <vscal
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_negative_offset:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v16, 0
; CHECK-NEXT: vmerge.vim v16, v16, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
@@ -521,10 +501,9 @@ define <vscale x 64 x i1> @test_vp_splice_nxv64i1_negative_offset(<vscale x 64 x
define <vscale x 64 x i1> @test_vp_splice_nxv64i1_masked(<vscale x 64 x i1> %va, <vscale x 64 x i1> %vb, <vscale x 64 x i1> %mask, i32 zeroext %evla, i32 zeroext %evlb) {
; CHECK-LABEL: test_vp_splice_nxv64i1_masked:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v10, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv.v.i v16, 0
; CHECK-NEXT: vmerge.vim v16, v16, 1, v0
; CHECK-NEXT: vsetvli zero, a0, e8, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
index f2234c69e14315..18d20f66987b27 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vreductions-mask-vp.ll
@@ -23,10 +23,9 @@ declare i1 @llvm.vp.reduce.or.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>, i
define zeroext i1 @vpreduce_or_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -40,10 +39,9 @@ declare i1 @llvm.vp.reduce.xor.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_xor_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -73,10 +71,9 @@ declare i1 @llvm.vp.reduce.or.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>, i
define zeroext i1 @vpreduce_or_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -90,10 +87,9 @@ declare i1 @llvm.vp.reduce.xor.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_xor_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -123,10 +119,9 @@ declare i1 @llvm.vp.reduce.or.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>, i
define zeroext i1 @vpreduce_or_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -140,10 +135,9 @@ declare i1 @llvm.vp.reduce.xor.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_xor_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -173,10 +167,9 @@ declare i1 @llvm.vp.reduce.or.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>, i
define zeroext i1 @vpreduce_or_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -190,10 +183,9 @@ declare i1 @llvm.vp.reduce.xor.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_xor_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -223,10 +215,9 @@ declare i1 @llvm.vp.reduce.or.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1>
define zeroext i1 @vpreduce_or_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -240,10 +231,9 @@ declare i1 @llvm.vp.reduce.xor.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_xor_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -273,10 +263,9 @@ declare i1 @llvm.vp.reduce.or.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1>
define zeroext i1 @vpreduce_or_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -290,10 +279,9 @@ declare i1 @llvm.vp.reduce.xor.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_xor_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -307,10 +295,9 @@ declare i1 @llvm.vp.reduce.or.nxv40i1(i1, <vscale x 40 x i1>, <vscale x 40 x i1>
define zeroext i1 @vpreduce_or_nxv40i1(i1 zeroext %s, <vscale x 40 x i1> %v, <vscale x 40 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv40i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -340,10 +327,9 @@ declare i1 @llvm.vp.reduce.or.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1>
define zeroext i1 @vpreduce_or_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_or_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -357,10 +343,9 @@ declare i1 @llvm.vp.reduce.xor.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_xor_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_xor_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -406,10 +391,9 @@ declare i1 @llvm.vp.reduce.add.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_add_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -423,10 +407,9 @@ declare i1 @llvm.vp.reduce.add.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_add_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -440,10 +423,9 @@ declare i1 @llvm.vp.reduce.add.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_add_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -457,10 +439,9 @@ declare i1 @llvm.vp.reduce.add.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_add_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -474,10 +455,9 @@ declare i1 @llvm.vp.reduce.add.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i1
define zeroext i1 @vpreduce_add_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -491,10 +471,9 @@ declare i1 @llvm.vp.reduce.add.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i1
define zeroext i1 @vpreduce_add_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -508,10 +487,9 @@ declare i1 @llvm.vp.reduce.add.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i1
define zeroext i1 @vpreduce_add_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_add_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: xor a0, a1, a0
@@ -638,10 +616,9 @@ declare i1 @llvm.vp.reduce.smin.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_smin_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -655,10 +632,9 @@ declare i1 @llvm.vp.reduce.smin.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_smin_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -672,10 +648,9 @@ declare i1 @llvm.vp.reduce.smin.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_smin_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -689,10 +664,9 @@ declare i1 @llvm.vp.reduce.smin.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_smin_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -706,10 +680,9 @@ declare i1 @llvm.vp.reduce.smin.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_smin_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -723,10 +696,9 @@ declare i1 @llvm.vp.reduce.smin.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_smin_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -740,10 +712,9 @@ declare i1 @llvm.vp.reduce.smin.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_smin_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_smin_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -757,10 +728,9 @@ declare i1 @llvm.vp.reduce.umax.nxv1i1(i1, <vscale x 1 x i1>, <vscale x 1 x i1>,
define zeroext i1 @vpreduce_umax_nxv1i1(i1 zeroext %s, <vscale x 1 x i1> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv1i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -774,10 +744,9 @@ declare i1 @llvm.vp.reduce.umax.nxv2i1(i1, <vscale x 2 x i1>, <vscale x 2 x i1>,
define zeroext i1 @vpreduce_umax_nxv2i1(i1 zeroext %s, <vscale x 2 x i1> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv2i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -791,10 +760,9 @@ declare i1 @llvm.vp.reduce.umax.nxv4i1(i1, <vscale x 4 x i1>, <vscale x 4 x i1>,
define zeroext i1 @vpreduce_umax_nxv4i1(i1 zeroext %s, <vscale x 4 x i1> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv4i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -808,10 +776,9 @@ declare i1 @llvm.vp.reduce.umax.nxv8i1(i1, <vscale x 8 x i1>, <vscale x 8 x i1>,
define zeroext i1 @vpreduce_umax_nxv8i1(i1 zeroext %s, <vscale x 8 x i1> %v, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv8i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -825,10 +792,9 @@ declare i1 @llvm.vp.reduce.umax.nxv16i1(i1, <vscale x 16 x i1>, <vscale x 16 x i
define zeroext i1 @vpreduce_umax_nxv16i1(i1 zeroext %s, <vscale x 16 x i1> %v, <vscale x 16 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv16i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -842,10 +808,9 @@ declare i1 @llvm.vp.reduce.umax.nxv32i1(i1, <vscale x 32 x i1>, <vscale x 32 x i
define zeroext i1 @vpreduce_umax_nxv32i1(i1 zeroext %s, <vscale x 32 x i1> %v, <vscale x 32 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv32i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m4, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
@@ -859,10 +824,9 @@ declare i1 @llvm.vp.reduce.umax.nxv64i1(i1, <vscale x 64 x i1>, <vscale x 64 x i
define zeroext i1 @vpreduce_umax_nxv64i1(i1 zeroext %s, <vscale x 64 x i1> %v, <vscale x 64 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv64i1:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vmv1r.v v9, v0
; CHECK-NEXT: vmv1r.v v0, v8
-; CHECK-NEXT: vsetvli zero, a1, e8, m8, ta, ma
; CHECK-NEXT: vcpop.m a1, v9, v0.t
; CHECK-NEXT: snez a1, a1
; CHECK-NEXT: or a0, a1, a0
>From c27fba571b7c9c7d9107dd2b846ed99a782fa5d8 Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 07:10:30 +0800
Subject: [PATCH 6/9] Label sew and ta/ma
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 7d74c4f61f4fbd..d1ee0e03dcbc51 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -1238,7 +1238,8 @@ void RISCVInsertVSETVLI::transferBefore(VSETVLIInfo &Info,
// be coalesced into another vsetvli since we won't demand any fields.
VSETVLIInfo NewInfo; // Need a new VSETVLIInfo to clear SEWLMULRatioOnly
NewInfo.setAVLImm(1);
- NewInfo.setVTYPE(RISCVII::VLMUL::LMUL_1, 8, true, true);
+ NewInfo.setVTYPE(RISCVII::VLMUL::LMUL_1, /*sew*/ 8, /*ta*/ true,
+ /*ma*/ true);
Info = NewInfo;
return;
}
>From 80256ff2abcbe2b18e8d7aa97140d06fdd6c6139 Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Tue, 3 Dec 2024 07:38:20 +0800
Subject: [PATCH 7/9] Add an implicit $vtype use on vector copies
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 22 +++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index d1ee0e03dcbc51..9af310a8443888 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -523,9 +523,14 @@ DemandedFields getDemanded(const MachineInstr &MI, const RISCVSubtarget *ST) {
//
// However it does need valid SEW, i.e. vill must be cleared. The entry to a
// function, calls and inline assembly may all set it, so make sure we clear
- // it for whole register copies.
- if (isVectorCopy(ST->getRegisterInfo(), MI))
- Res.VILL = true;
+ // it for whole register copies. Do this by leaving VILL demanded.
+ if (isVectorCopy(ST->getRegisterInfo(), MI)) {
+ Res.LMUL = DemandedFields::LMULNone;
+ Res.SEW = DemandedFields::SEWNone;
+ Res.SEWLMULRatio = false;
+ Res.TailPolicy = false;
+ Res.MaskPolicy = false;
+ }
return Res;
}
@@ -1463,10 +1468,13 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
PrefixTransparent = false;
}
- if (isVectorCopy(ST->getRegisterInfo(), MI) &&
- !PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
- insertVSETVLI(MBB, MI, MI.getDebugLoc(), CurInfo, PrevInfo);
- PrefixTransparent = false;
+ if (isVectorCopy(ST->getRegisterInfo(), MI)) {
+ if (!PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
+ insertVSETVLI(MBB, MI, MI.getDebugLoc(), CurInfo, PrevInfo);
+ PrefixTransparent = false;
+ }
+ MI.addOperand(MachineOperand::CreateReg(RISCV::VTYPE, /*isDef*/ false,
+ /*isImp*/ true));
}
uint64_t TSFlags = MI.getDesc().TSFlags;
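To make the intent of this patch concrete, here is an illustrative sketch (not taken verbatim from the patch; the registers and vtype are chosen to match the test updates above) of the effect on a whole vector register move that follows a call or inline assembly, where vill may be set:

  vsetvli zero, a1, e8, m1, ta, ma   # inserted only when the incoming state is not compatible
  vmv1r.v v9, v0                     # whole register copy, now carrying an implicit use of $vtype

If the incoming state already provides a valid vtype, no vsetvli is emitted and the copy simply picks up the implicit $vtype use.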
>From a83299d6271fbadfc4ae706c8dda4b9a99d59720 Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Wed, 4 Dec 2024 15:35:36 +0800
Subject: [PATCH 8/9] Add
-riscv-insert-vsetvli-whole-vector-register-move-valid-vtype flag, enabled by
default
---
llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
index 9af310a8443888..0f952384549ed5 100644
--- a/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp
@@ -40,6 +40,12 @@ using namespace llvm;
STATISTIC(NumInsertedVSETVL, "Number of VSETVL inst inserted");
STATISTIC(NumCoalescedVSETVL, "Number of VSETVL inst coalesced");
+static cl::opt<bool> EnsureWholeVectorRegisterMoveValidVTYPE(
+ DEBUG_TYPE "-whole-vector-register-move-valid-vtype", cl::Hidden,
+ cl::desc("Insert vsetvlis before vmvNr.vs to ensure vtype is valid and "
+ "vill is cleared"),
+ cl::init(true));
+
namespace {
/// Given a virtual register \p Reg, return the corresponding VNInfo for it.
@@ -1468,7 +1474,8 @@ void RISCVInsertVSETVLI::emitVSETVLIs(MachineBasicBlock &MBB) {
PrefixTransparent = false;
}
- if (isVectorCopy(ST->getRegisterInfo(), MI)) {
+ if (EnsureWholeVectorRegisterMoveValidVTYPE &&
+ isVectorCopy(ST->getRegisterInfo(), MI)) {
if (!PrevInfo.isCompatible(DemandedFields::all(), CurInfo, LIS)) {
insertVSETVLI(MBB, MI, MI.getDebugLoc(), CurInfo, PrevInfo);
PrefixTransparent = false;
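A brief usage note, stated as an assumption rather than something spelled out in the patch: since this is a hidden cl::opt it should be togglable from llc, e.g. with -riscv-insert-vsetvli-whole-vector-register-move-valid-vtype=false. With the flag disabled the guarded block above is skipped entirely, so a sketch of the same copy would be:

  vmv1r.v v9, v0   # emitted with no preceding vsetvli and no implicit $vtype use

The default of true keeps the behaviour where vtype is guaranteed valid at the copy.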
>From 48047eb930ebda5fa26303fb90757669c5020e5c Mon Sep 17 00:00:00 2001
From: Luke Lau <luke at igalia.com>
Date: Fri, 6 Dec 2024 00:46:22 +0800
Subject: [PATCH 9/9] Update tests after rebase
---
llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
index 97e458e70565ce..99aaecf4c6843d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
@@ -38,8 +38,8 @@ define <4 x float> @interleave_v2f32(<2 x float> %x, <2 x float> %y) {
define <4 x double> @interleave_v2f64(<2 x double> %x, <2 x double> %y) {
; V128-LABEL: interleave_v2f64:
; V128: # %bb.0:
-; V128-NEXT: vmv1r.v v12, v9
; V128-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
+; V128-NEXT: vmv1r.v v12, v9
; V128-NEXT: vid.v v9
; V128-NEXT: vmv.v.i v0, 10
; V128-NEXT: vsrl.vi v14, v9, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
index a8eb1f97fd1a2c..7500198f140022 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
@@ -51,8 +51,8 @@ define <4 x i32> @interleave_v2i32(<2 x i32> %x, <2 x i32> %y) {
define <4 x i64> @interleave_v2i64(<2 x i64> %x, <2 x i64> %y) {
; V128-LABEL: interleave_v2i64:
; V128: # %bb.0:
-; V128-NEXT: vmv1r.v v12, v9
; V128-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
+; V128-NEXT: vmv1r.v v12, v9
; V128-NEXT: vid.v v9
; V128-NEXT: vmv.v.i v0, 10
; V128-NEXT: vsrl.vi v14, v9, 1