[llvm] [RISCV] Introduce VLOptimizer pass (PR #108640)
Michael Maitland via llvm-commits
llvm-commits at lists.llvm.org
Fri Sep 13 13:06:17 PDT 2024
https://github.com/michaelmaitland created https://github.com/llvm/llvm-project/pull/108640
The purpose of this optimization is to make the VL argument of instructions that have one as small as possible. This is implemented by visiting each instruction in reverse order and, if it has a VL argument, checking whether that VL can be reduced.
By putting this pass before VSETVLI insertion, we see three kinds of changes to generated code:
1. Eliminate VSETVLI instructions
2. Reduce the VL toggle on VSETVLI instructions that also change vtype
3. Reduce the VL set by a VSETVLI instruction
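For example, here is the effect on one of the test_vloxei cases from the narrow-shift-extend.ll update below. The vzext/vsll that compute the indices used to run at VL=VLMAX; with this pass they run at the load's VL instead, so the first VSETVLI now sets the smaller VL (change 3) and the second becomes a vtype-only change (change 2):

Before:
    vsetvli    a2, zero, e64, m4, ta, ma
    vzext.vf8  v12, v8
    vsll.vi    v12, v12, 4
    vsetvli    zero, a1, e32, m2, ta, ma
    vloxei64.v v8, (a0), v12

After:
    vsetvli    zero, a1, e64, m4, ta, ma
    vzext.vf8  v12, v8
    vsll.vi    v12, v12, 4
    vsetvli    zero, zero, e32, m2, ta, ma
    vloxei64.v v8, (a0), v12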
The list of supported instructions is currently whitelisted for safety. In the future, we could add more instructions to `isSupportedInstr` to support even more VL optimization.
We originally wrote this pass because vector GEP instructions do not take a VL, which leads us to emit code that uses VL=VLMAX to implement GEP in the RISC-V backend. As a result, some of the vector instructions will write to lanes, specifically between the intended VL and VLMAX, that will never be read. As an alternative to this pass, we considered adding a vector predicated GEP instruction, but this would not fit well into the intrinsic type system since GEP has a variable number of arguments, each with arbitrary types. The second approach we considered was to put this pass after VSETVLI insertion, but we found that it was more difficult to recognize optimization opportunities, especially across basic block boundaries -- the data flow analysis was also a bit more expensive and complex.
While this pass solves the GEP problem, we have expanded it to handle more cases of VL optimization, and there is opportunity for the analysis to be improved to enable even more optimization.
In our downstream compiler, this was able to optimize > 5000 VLs on spec2006 and > 6000 VLs on spec2017. These measurements were using `force-tail-folding-style=data-with-evl` during vectorization. I don't have numbers for the upstream compiler.
>From bddad032ec3b1b09cf8d3e672df6c29a4785428f Mon Sep 17 00:00:00 2001
From: Michael Maitland <michaeltmaitland at gmail.com>
Date: Tue, 27 Jun 2023 09:19:36 -0700
Subject: [PATCH] [RISCV] Add VLOptimizer pass
The purpose of this optimization is to make the VL argument of instructions
that have one as small as possible. This is implemented by visiting each
instruction in reverse order and, if it has a VL argument, checking whether
that VL can be reduced.
This is done before vsetvli insertion to reduce the number of generated
vsetvlis. It can also reduce the number of vsetvli instructions that toggle
the VL (the vtype may still need to get set).
The list of supported instructions is currently whitelisted for
safety. In the future, we could add more instructions to isSupportedInstr
to support even more VL optimization.
Co-authored-by: Craig Topper <craig.topper at sifive.com>
Co-authored-by: Kito Cheng <kito.cheng at sifive.com>
---
llvm/lib/Target/RISCV/CMakeLists.txt | 1 +
llvm/lib/Target/RISCV/RISCV.h | 4 +
llvm/lib/Target/RISCV/RISCVTargetMachine.cpp | 9 +-
llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp | 1468 +++++++++++++++++
llvm/test/CodeGen/RISCV/O3-pipeline.ll | 3 +-
.../CodeGen/RISCV/rvv/narrow-shift-extend.ll | 54 +-
llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll | 54 +-
llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll | 42 +-
llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll | 36 +-
llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll | 45 +-
llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll | 45 +-
llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll | 7 +-
.../test/CodeGen/RISCV/rvv/vpgather-sdnode.ll | 383 ++---
.../CodeGen/RISCV/rvv/vpscatter-sdnode.ll | 365 ++--
llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll | 2 -
.../RISCV/rvv/vsetvli-insert-crossbb.ll | 3 -
llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll | 2 -
llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll | 90 +-
29 files changed, 1960 insertions(+), 678 deletions(-)
create mode 100644 llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp
diff --git a/llvm/lib/Target/RISCV/CMakeLists.txt b/llvm/lib/Target/RISCV/CMakeLists.txt
index 124bc239451aed..46ef94b4dd2931 100644
--- a/llvm/lib/Target/RISCV/CMakeLists.txt
+++ b/llvm/lib/Target/RISCV/CMakeLists.txt
@@ -59,6 +59,7 @@ add_llvm_target(RISCVCodeGen
RISCVTargetObjectFile.cpp
RISCVTargetTransformInfo.cpp
RISCVVectorPeephole.cpp
+ RISCVVLOptimizer.cpp
GISel/RISCVCallLowering.cpp
GISel/RISCVInstructionSelector.cpp
GISel/RISCVLegalizerInfo.cpp
diff --git a/llvm/lib/Target/RISCV/RISCV.h b/llvm/lib/Target/RISCV/RISCV.h
index 5a94ada8f8dd46..651119d10d1119 100644
--- a/llvm/lib/Target/RISCV/RISCV.h
+++ b/llvm/lib/Target/RISCV/RISCV.h
@@ -99,6 +99,10 @@ void initializeRISCVO0PreLegalizerCombinerPass(PassRegistry &);
FunctionPass *createRISCVPreLegalizerCombiner();
void initializeRISCVPreLegalizerCombinerPass(PassRegistry &);
+
+FunctionPass *createRISCVVLOptimizerPass();
+void initializeRISCVVLOptimizerPass(PassRegistry &);
+
} // namespace llvm
#endif
diff --git a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
index 794df2212dfa53..bfedc6b177332e 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
@@ -103,6 +103,10 @@ static cl::opt<bool> EnableVSETVLIAfterRVVRegAlloc(
cl::desc("Insert vsetvls after vector register allocation"),
cl::init(true));
+static cl::opt<bool> EnableVLOptimizer("riscv-enable-vloptimizer",
+ cl::desc("Enable the VL Optimizer pass"),
+ cl::init(true), cl::Hidden);
+
extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeRISCVTarget() {
RegisterTargetMachine<RISCVTargetMachine> X(getTheRISCV32Target());
RegisterTargetMachine<RISCVTargetMachine> Y(getTheRISCV64Target());
@@ -550,8 +554,11 @@ void RISCVPassConfig::addMachineSSAOptimization() {
void RISCVPassConfig::addPreRegAlloc() {
addPass(createRISCVPreRAExpandPseudoPass());
- if (TM->getOptLevel() != CodeGenOptLevel::None)
+ if (TM->getOptLevel() != CodeGenOptLevel::None) {
addPass(createRISCVMergeBaseOffsetOptPass());
+ if (EnableVLOptimizer)
+ addPass(createRISCVVLOptimizerPass());
+ }
addPass(createRISCVInsertReadWriteCSRPass());
addPass(createRISCVInsertWriteVXRMPass());
diff --git a/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp b/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp
new file mode 100644
index 00000000000000..d25db9b38bec44
--- /dev/null
+++ b/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp
@@ -0,0 +1,1468 @@
+//===-------------- RISCVVLOptimizer.cpp - VL Optimizer ------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===---------------------------------------------------------------------===//
+//
+// This pass reduces the VL where possible at the MI level, before VSETVLI
+// instructions are inserted.
+//
+// The purpose of this optimization is to make the VL argument of instructions
+// that have one as small as possible. This is implemented by visiting each
+// instruction in reverse order and, if it has a VL argument, checking whether
+// that VL can be reduced.
+//
+//===---------------------------------------------------------------------===//
+
+#include "RISCV.h"
+#include "RISCVMachineFunctionInfo.h"
+#include "RISCVSubtarget.h"
+#include "llvm/ADT/SetVector.h"
+#include "llvm/CodeGen/MachineDominators.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/InitializePasses.h"
+
+#include <algorithm>
+
+using namespace llvm;
+
+#define DEBUG_TYPE "riscv-vl-optimizer"
+
+namespace {
+
+class RISCVVLOptimizer : public MachineFunctionPass {
+ const MachineRegisterInfo *MRI;
+ const MachineDominatorTree *MDT;
+
+public:
+ static char ID;
+
+ RISCVVLOptimizer() : MachineFunctionPass(ID) {
+ initializeRISCVVLOptimizerPass(*PassRegistry::getPassRegistry());
+ }
+
+ bool runOnMachineFunction(MachineFunction &MF) override;
+
+ void getAnalysisUsage(AnalysisUsage &AU) const override {
+ AU.setPreservesCFG();
+ AU.addRequired<MachineDominatorTreeWrapperPass>();
+ MachineFunctionPass::getAnalysisUsage(AU);
+ }
+
+ StringRef getPassName() const override { return "RISC-V VL Optimizer"; }
+
+private:
+ bool tryReduceVL(MachineInstr &MI);
+};
+
+} // end anonymous namespace
+
+char RISCVVLOptimizer::ID = 0;
+INITIALIZE_PASS_BEGIN(RISCVVLOptimizer, DEBUG_TYPE, "RISC-V VL Optimizer",
+ false, false)
+INITIALIZE_PASS_DEPENDENCY(MachineDominatorTreeWrapperPass)
+INITIALIZE_PASS_END(RISCVVLOptimizer, DEBUG_TYPE, "RISC-V VL Optimizer", false,
+ false)
+
+FunctionPass *llvm::createRISCVVLOptimizerPass() {
+ return new RISCVVLOptimizer();
+}
+
+/// Return true if R is a physical or virtual vector register, false otherwise.
+static bool isVectorRegClass(Register R, const MachineRegisterInfo *MRI) {
+ if (R.isPhysical())
+ return RISCV::VRRegClass.contains(R);
+ const TargetRegisterClass *RC = MRI->getRegClass(R);
+ return RISCV::VRRegClass.hasSubClassEq(RC) ||
+ RISCV::VRM2RegClass.hasSubClassEq(RC) ||
+ RISCV::VRM4RegClass.hasSubClassEq(RC) ||
+ RISCV::VRM8RegClass.hasSubClassEq(RC);
+}
+
+/// Represents the EMUL and EEW of a MachineOperand.
+struct OperandInfo {
+ enum class State {
+ Unknown,
+ Known,
+ } S;
+
+ // Represent as 1,2,4,8, ... and fractional indicator. This is because
+ // EMUL can take on values that don't map to RISCVII::VLMUL values exactly.
+ // For example, a mask operand can have an EMUL less than MF8.
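+ // The first element is the integer part (1, 2, 4, 8, ...) and the second is
+ // true when the EMUL is fractional; e.g. {2, true} means EMUL=1/2 and
+ // {4, false} means EMUL=4.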
+ std::pair<unsigned, bool> EMUL;
+
+ unsigned Log2EEW;
+
+ OperandInfo(RISCVII::VLMUL EMUL, unsigned Log2EEW)
+ : S(State::Known), EMUL(RISCVVType::decodeVLMUL(EMUL)), Log2EEW(Log2EEW) {
+ }
+
+ OperandInfo(std::pair<unsigned, bool> EMUL, unsigned Log2EEW)
+ : S(State::Known), EMUL(EMUL), Log2EEW(Log2EEW) {}
+
+ OperandInfo(State S) : S(S) {
+ assert(S != State::Known &&
+ "This constructor may only be used to construct "
+ "an Unknown OperandInfo");
+ }
+
+ bool isUnknown() { return S == State::Unknown; }
+ bool isKnown() { return S == State::Known; }
+
+ static bool EMULAndEEWAreEqual(OperandInfo A, OperandInfo B) {
+ assert(A.isKnown() && B.isKnown() && "Both operands must be known");
+ return A.Log2EEW == B.Log2EEW && A.EMUL.first == B.EMUL.first &&
+ A.EMUL.second == B.EMUL.second;
+ }
+};
+
+/// Return the RISCVII::VLMUL that is two times VLMul.
+/// Precondition: VLMul is not LMUL_RESERVED or LMUL_8.
+static RISCVII::VLMUL twoTimesVLMUL(RISCVII::VLMUL VLMul) {
+ switch (VLMul) {
+ case RISCVII::VLMUL::LMUL_F8:
+ return RISCVII::VLMUL::LMUL_F4;
+ case RISCVII::VLMUL::LMUL_F4:
+ return RISCVII::VLMUL::LMUL_F2;
+ case RISCVII::VLMUL::LMUL_F2:
+ return RISCVII::VLMUL::LMUL_1;
+ case RISCVII::VLMUL::LMUL_1:
+ return RISCVII::VLMUL::LMUL_2;
+ case RISCVII::VLMUL::LMUL_2:
+ return RISCVII::VLMUL::LMUL_4;
+ case RISCVII::VLMUL::LMUL_4:
+ return RISCVII::VLMUL::LMUL_8;
+ case RISCVII::VLMUL::LMUL_8:
+ default:
+ llvm_unreachable("Could not multiply VLMul by 2");
+ }
+}
+
+/// Return the RISCVII::VLMUL that is VLMul / 2.
+/// Precondition: VLMul is not LMUL_RESERVED or LMUL_MF8.
+static RISCVII::VLMUL halfVLMUL(RISCVII::VLMUL VLMul) {
+ switch (VLMul) {
+ case RISCVII::VLMUL::LMUL_F4:
+ return RISCVII::VLMUL::LMUL_F8;
+ case RISCVII::VLMUL::LMUL_F2:
+ return RISCVII::VLMUL::LMUL_F4;
+ case RISCVII::VLMUL::LMUL_1:
+ return RISCVII::VLMUL::LMUL_F2;
+ case RISCVII::VLMUL::LMUL_2:
+ return RISCVII::VLMUL::LMUL_1;
+ case RISCVII::VLMUL::LMUL_4:
+ return RISCVII::VLMUL::LMUL_2;
+ case RISCVII::VLMUL::LMUL_8:
+ return RISCVII::VLMUL::LMUL_4;
+ case RISCVII::VLMUL::LMUL_F8:
+ default:
+ llvm_unreachable("Could not divide VLMul by 2");
+ }
+}
+
+/// Return EMUL = (EEW / SEW) * LMUL where EEW comes from Log2EEW and LMUL and
+/// SEW are from the TSFlags of MI.
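+/// For example, with EEW=8, SEW=32 and LMUL=2, the result is (8/32)*2 = 1/2,
+/// returned as {2, true}.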
+static std::pair<unsigned, bool>
+getEMULEqualsEEWDivSEWTimesLMUL(unsigned Log2EEW, const MachineInstr &MI) {
+ RISCVII::VLMUL MIVLMUL = RISCVII::getLMul(MI.getDesc().TSFlags);
+ auto [MILMUL, MILMULIsFractional] = RISCVVType::decodeVLMUL(MIVLMUL);
+ unsigned MILog2SEW =
+ MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+ unsigned MISEW = 1 << MILog2SEW;
+
+ unsigned EEW = 1 << Log2EEW;
+ // Calculate (EEW/SEW)*LMUL preserving fractions less than 1. Use GCD
+ // to put fraction in simplest form.
+ unsigned Num = EEW, Denom = MISEW;
+ int GCD = MILMULIsFractional ? std::gcd(Num, Denom * MILMUL)
+ : std::gcd(Num * MILMUL, Denom);
+ Num = MILMULIsFractional ? Num / GCD : Num * MILMUL / GCD;
+ Denom = MILMULIsFractional ? Denom * MILMUL / GCD : Denom / GCD;
+ return std::make_pair(Num > Denom ? Num : Denom, Denom > Num);
+}
+
+/// An indexed segment load or store instruction has the form
+/// v.*seg<nf>ei<eew>.v. Data has EEW=SEW, EMUL=LMUL. Index has EEW=<eew>,
+/// EMUL=(EEW/SEW)*LMUL. LMUL and SEW come from the TSFlags of MI.
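+/// For example, for vluxseg2ei16.v with SEW=32 and LMUL=1, the data operand
+/// has EEW=32, EMUL=1 and the index operand has EEW=16, EMUL=(16/32)*1 = 1/2.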
+static OperandInfo
+getIndexSegmentLoadStoreOperandInfo(unsigned Log2EEW, const MachineInstr &MI,
+ const MachineOperand &MO) {
+ // Operand 0 is data register
+ // Data vector register group has EEW=SEW, EMUL=LMUL.
+ if (MO.getOperandNo() == 0) {
+ RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+ unsigned MILog2SEW =
+ MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+ return OperandInfo(MIVLMul, MILog2SEW);
+ }
+
+ // The index vector register group of v.*seg<nf>ei<eew>.v has EEW=<eew>,
+ // EMUL=(EEW/SEW)*LMUL.
+ if (MO.getOperandNo() == 2)
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(Log2EEW, MI), Log2EEW);
+
+ llvm_unreachable("Could not get OperandInfo for non-vector register of an "
+ "indexed segment load or store instruction");
+}
+
+/// Dest has EEW=SEW and EMUL=LMUL. Source has EEW=SEW/Factor (e.g. VF2 =>
+/// SEW/2) and EMUL=(EEW/SEW)*LMUL. LMUL and SEW come from the TSFlags of MI.
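+/// For example, vzext.vf4 with SEW=32 and LMUL=2 reads a source with EEW=8
+/// and EMUL=(8/32)*2 = 1/2.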
+static OperandInfo getIntegerExtensionOperandInfo(unsigned Factor,
+ const MachineInstr &MI,
+ const MachineOperand &MO) {
+ RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+ unsigned MILog2SEW =
+ MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+
+ if (MO.getOperandNo() == 0)
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ unsigned MISEW = 1 << MILog2SEW;
+ unsigned EEW = MISEW / Factor;
+ unsigned Log2EEW = Log2_32(EEW);
+
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(Log2EEW, MI), Log2EEW);
+}
+
+/// Check whether MO is a mask operand of MI.
+static bool isMaskOperand(const MachineInstr &MI, const MachineOperand &MO,
+ const MachineRegisterInfo *MRI) {
+
+ if (!MO.isReg() || !isVectorRegClass(MO.getReg(), MRI))
+ return false;
+
+ const MCInstrDesc &Desc = MI.getDesc();
+ return Desc.operands()[MO.getOperandNo()].RegClass == RISCV::VMV0RegClassID;
+}
+
+/// Return the OperandInfo for MO, which is an operand of MI.
+static OperandInfo getOperandInfo(const MachineInstr &MI,
+ const MachineOperand &MO,
+ const MachineRegisterInfo *MRI) {
+ const RISCVVPseudosTable::PseudoInfo *RVV =
+ RISCVVPseudosTable::getPseudoInfo(MI.getOpcode());
+ assert(RVV && "Could not find MI in PseudoTable");
+
+ // MI has a VLMUL and SEW associated with it. The RVV specification defines
+ // the LMUL and SEW of each operand and definition in relation to MI.VLMUL and
+ // MI.SEW.
+ RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+ unsigned MILog2SEW =
+ MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+ bool IsMODef = MO.getOperandNo() == 0;
+
+ // All mask operands have EEW=1, EMUL=(EEW/SEW)*LMUL
+ if (isMaskOperand(MI, MO, MRI))
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+
+ // TODO: Pseudos that end in _MASK or _TU can have a merge operand.
+ // We bail out early for instructions that have merge operands for now.
+ // Merge operand is first operand after defs.
+ if (MO.getOperandNo() == MI.getNumExplicitDefs() && MO.isReg() && MO.isTied())
+ return OperandInfo(OperandInfo::State::Unknown);
+
+ // switch against BaseInstr to reduce number of cases that need to be
+ // considered.
+ switch (RVV->BaseInstr) {
+
+ // 6. Configuration-Setting Instructions
+ // Configuration setting instructions do not read or write vector registers
+ case RISCV::VSETIVLI:
+ case RISCV::VSETVL:
+ case RISCV::VSETVLI:
+ llvm_unreachable("Configuration setting instructions do not read or write "
+ "vector registers");
+
+ // 7. Vector Loads and Stores
+ // 7.4. Vector Unit-Stride Instructions
+ // 7.5. Vector Strided Instructions
+ // 7.7. Unit-stride Fault-Only-First Loads
+ /// Dest EEW encoded in the instruction and EMUL=(EEW/SEW)*LMUL
+ case RISCV::VLE8_V:
+ case RISCV::VSE8_V:
+ case RISCV::VLM_V:
+ case RISCV::VSM_V:
+ case RISCV::VLSE8_V:
+ case RISCV::VSSE8_V:
+ case RISCV::VLE8FF_V:
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(3, MI), 3);
+ case RISCV::VLE16_V:
+ case RISCV::VSE16_V:
+ case RISCV::VLSE16_V:
+ case RISCV::VSSE16_V:
+ case RISCV::VLE16FF_V:
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(4, MI), 4);
+ case RISCV::VLE32_V:
+ case RISCV::VSE32_V:
+ case RISCV::VLSE32_V:
+ case RISCV::VSSE32_V:
+ case RISCV::VLE32FF_V:
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(5, MI), 5);
+ case RISCV::VLE64_V:
+ case RISCV::VSE64_V:
+ case RISCV::VLSE64_V:
+ case RISCV::VSSE64_V:
+ case RISCV::VLE64FF_V:
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(6, MI), 6);
+
+ // 7.6. Vector Indexed Instructions
+ // Data EEW=SEW, EMUL=LMUL. Index EEW=<eew> and EMUL=(EEW/SEW)*LMUL
+ case RISCV::VLUXEI8_V:
+ case RISCV::VLOXEI8_V:
+ case RISCV::VSUXEI8_V:
+ case RISCV::VSOXEI8_V:
+ if (MO.getOperandNo() == 0)
+ return OperandInfo(MIVLMul, MILog2SEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(3, MI), 3);
+ case RISCV::VLUXEI16_V:
+ case RISCV::VLOXEI16_V:
+ case RISCV::VSUXEI16_V:
+ case RISCV::VSOXEI16_V:
+ if (MO.getOperandNo() == 0)
+ return OperandInfo(MIVLMul, MILog2SEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(4, MI), 4);
+ case RISCV::VLUXEI32_V:
+ case RISCV::VLOXEI32_V:
+ case RISCV::VSUXEI32_V:
+ case RISCV::VSOXEI32_V:
+ if (MO.getOperandNo() == 0)
+ return OperandInfo(MIVLMul, MILog2SEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(5, MI), 5);
+ case RISCV::VLUXEI64_V:
+ case RISCV::VLOXEI64_V:
+ case RISCV::VSUXEI64_V:
+ case RISCV::VSOXEI64_V:
+ if (MO.getOperandNo() == 0)
+ return OperandInfo(MIVLMul, MILog2SEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(6, MI), 6);
+
+ // 7.8. Vector Load/Store Segment Instructions
+ // 7.8.1. Vector Unit-Stride Segment Loads and Stores
+ // v.*seg<nf>e<eew>.*
+ // EEW=eew, EMUL=LMUL
+ case RISCV::VLSEG2E8_V:
+ case RISCV::VLSEG2E8FF_V:
+ case RISCV::VLSEG3E8_V:
+ case RISCV::VLSEG3E8FF_V:
+ case RISCV::VLSEG4E8_V:
+ case RISCV::VLSEG4E8FF_V:
+ case RISCV::VLSEG5E8_V:
+ case RISCV::VLSEG5E8FF_V:
+ case RISCV::VLSEG6E8_V:
+ case RISCV::VLSEG6E8FF_V:
+ case RISCV::VLSEG7E8_V:
+ case RISCV::VLSEG7E8FF_V:
+ case RISCV::VLSEG8E8_V:
+ case RISCV::VLSEG8E8FF_V:
+ case RISCV::VSSEG2E8_V:
+ case RISCV::VSSEG3E8_V:
+ case RISCV::VSSEG4E8_V:
+ case RISCV::VSSEG5E8_V:
+ case RISCV::VSSEG6E8_V:
+ case RISCV::VSSEG7E8_V:
+ case RISCV::VSSEG8E8_V:
+ return OperandInfo(MIVLMul, 3);
+ case RISCV::VLSEG2E16_V:
+ case RISCV::VLSEG2E16FF_V:
+ case RISCV::VLSEG3E16_V:
+ case RISCV::VLSEG3E16FF_V:
+ case RISCV::VLSEG4E16_V:
+ case RISCV::VLSEG4E16FF_V:
+ case RISCV::VLSEG5E16_V:
+ case RISCV::VLSEG5E16FF_V:
+ case RISCV::VLSEG6E16_V:
+ case RISCV::VLSEG6E16FF_V:
+ case RISCV::VLSEG7E16_V:
+ case RISCV::VLSEG7E16FF_V:
+ case RISCV::VLSEG8E16_V:
+ case RISCV::VLSEG8E16FF_V:
+ case RISCV::VSSEG2E16_V:
+ case RISCV::VSSEG3E16_V:
+ case RISCV::VSSEG4E16_V:
+ case RISCV::VSSEG5E16_V:
+ case RISCV::VSSEG6E16_V:
+ case RISCV::VSSEG7E16_V:
+ case RISCV::VSSEG8E16_V:
+ return OperandInfo(MIVLMul, 4);
+ case RISCV::VLSEG2E32_V:
+ case RISCV::VLSEG2E32FF_V:
+ case RISCV::VLSEG3E32_V:
+ case RISCV::VLSEG3E32FF_V:
+ case RISCV::VLSEG4E32_V:
+ case RISCV::VLSEG4E32FF_V:
+ case RISCV::VLSEG5E32_V:
+ case RISCV::VLSEG5E32FF_V:
+ case RISCV::VLSEG6E32_V:
+ case RISCV::VLSEG6E32FF_V:
+ case RISCV::VLSEG7E32_V:
+ case RISCV::VLSEG7E32FF_V:
+ case RISCV::VLSEG8E32_V:
+ case RISCV::VLSEG8E32FF_V:
+ case RISCV::VSSEG2E32_V:
+ case RISCV::VSSEG3E32_V:
+ case RISCV::VSSEG4E32_V:
+ case RISCV::VSSEG5E32_V:
+ case RISCV::VSSEG6E32_V:
+ case RISCV::VSSEG7E32_V:
+ case RISCV::VSSEG8E32_V:
+ return OperandInfo(MIVLMul, 5);
+ case RISCV::VLSEG2E64_V:
+ case RISCV::VLSEG2E64FF_V:
+ case RISCV::VLSEG3E64_V:
+ case RISCV::VLSEG3E64FF_V:
+ case RISCV::VLSEG4E64_V:
+ case RISCV::VLSEG4E64FF_V:
+ case RISCV::VLSEG5E64_V:
+ case RISCV::VLSEG5E64FF_V:
+ case RISCV::VLSEG6E64_V:
+ case RISCV::VLSEG6E64FF_V:
+ case RISCV::VLSEG7E64_V:
+ case RISCV::VLSEG7E64FF_V:
+ case RISCV::VLSEG8E64_V:
+ case RISCV::VLSEG8E64FF_V:
+ case RISCV::VSSEG2E64_V:
+ case RISCV::VSSEG3E64_V:
+ case RISCV::VSSEG4E64_V:
+ case RISCV::VSSEG5E64_V:
+ case RISCV::VSSEG6E64_V:
+ case RISCV::VSSEG7E64_V:
+ case RISCV::VSSEG8E64_V:
+ return OperandInfo(MIVLMul, 6);
+
+ // 7.8.2. Vector Strided Segment Loads and Stores
+ case RISCV::VLSSEG2E8_V:
+ case RISCV::VLSSEG3E8_V:
+ case RISCV::VLSSEG4E8_V:
+ case RISCV::VLSSEG5E8_V:
+ case RISCV::VLSSEG6E8_V:
+ case RISCV::VLSSEG7E8_V:
+ case RISCV::VLSSEG8E8_V:
+ case RISCV::VSSSEG2E8_V:
+ case RISCV::VSSSEG3E8_V:
+ case RISCV::VSSSEG4E8_V:
+ case RISCV::VSSSEG5E8_V:
+ case RISCV::VSSSEG6E8_V:
+ case RISCV::VSSSEG7E8_V:
+ case RISCV::VSSSEG8E8_V:
+ return OperandInfo(MIVLMul, 3);
+ case RISCV::VLSSEG2E16_V:
+ case RISCV::VLSSEG3E16_V:
+ case RISCV::VLSSEG4E16_V:
+ case RISCV::VLSSEG5E16_V:
+ case RISCV::VLSSEG6E16_V:
+ case RISCV::VLSSEG7E16_V:
+ case RISCV::VLSSEG8E16_V:
+ case RISCV::VSSSEG2E16_V:
+ case RISCV::VSSSEG3E16_V:
+ case RISCV::VSSSEG4E16_V:
+ case RISCV::VSSSEG5E16_V:
+ case RISCV::VSSSEG6E16_V:
+ case RISCV::VSSSEG7E16_V:
+ case RISCV::VSSSEG8E16_V:
+ return OperandInfo(MIVLMul, 4);
+ case RISCV::VLSSEG2E32_V:
+ case RISCV::VLSSEG3E32_V:
+ case RISCV::VLSSEG4E32_V:
+ case RISCV::VLSSEG5E32_V:
+ case RISCV::VLSSEG6E32_V:
+ case RISCV::VLSSEG7E32_V:
+ case RISCV::VLSSEG8E32_V:
+ case RISCV::VSSSEG2E32_V:
+ case RISCV::VSSSEG3E32_V:
+ case RISCV::VSSSEG4E32_V:
+ case RISCV::VSSSEG5E32_V:
+ case RISCV::VSSSEG6E32_V:
+ case RISCV::VSSSEG7E32_V:
+ case RISCV::VSSSEG8E32_V:
+ return OperandInfo(MIVLMul, 5);
+ case RISCV::VLSSEG2E64_V:
+ case RISCV::VLSSEG3E64_V:
+ case RISCV::VLSSEG4E64_V:
+ case RISCV::VLSSEG5E64_V:
+ case RISCV::VLSSEG6E64_V:
+ case RISCV::VLSSEG7E64_V:
+ case RISCV::VLSSEG8E64_V:
+ case RISCV::VSSSEG2E64_V:
+ case RISCV::VSSSEG3E64_V:
+ case RISCV::VSSSEG4E64_V:
+ case RISCV::VSSSEG5E64_V:
+ case RISCV::VSSSEG6E64_V:
+ case RISCV::VSSSEG7E64_V:
+ case RISCV::VSSSEG8E64_V:
+ return OperandInfo(MIVLMul, 6);
+
+ // 7.8.3. Vector Indexed Segment Loads and Stores
+ case RISCV::VLUXSEG2EI8_V:
+ case RISCV::VLUXSEG3EI8_V:
+ case RISCV::VLUXSEG4EI8_V:
+ case RISCV::VLUXSEG5EI8_V:
+ case RISCV::VLUXSEG6EI8_V:
+ case RISCV::VLUXSEG7EI8_V:
+ case RISCV::VLUXSEG8EI8_V:
+ case RISCV::VLOXSEG2EI8_V:
+ case RISCV::VLOXSEG3EI8_V:
+ case RISCV::VLOXSEG4EI8_V:
+ case RISCV::VLOXSEG5EI8_V:
+ case RISCV::VLOXSEG6EI8_V:
+ case RISCV::VLOXSEG7EI8_V:
+ case RISCV::VLOXSEG8EI8_V:
+ case RISCV::VSUXSEG2EI8_V:
+ case RISCV::VSUXSEG3EI8_V:
+ case RISCV::VSUXSEG4EI8_V:
+ case RISCV::VSUXSEG5EI8_V:
+ case RISCV::VSUXSEG6EI8_V:
+ case RISCV::VSUXSEG7EI8_V:
+ case RISCV::VSUXSEG8EI8_V:
+ case RISCV::VSOXSEG2EI8_V:
+ case RISCV::VSOXSEG3EI8_V:
+ case RISCV::VSOXSEG4EI8_V:
+ case RISCV::VSOXSEG5EI8_V:
+ case RISCV::VSOXSEG6EI8_V:
+ case RISCV::VSOXSEG7EI8_V:
+ case RISCV::VSOXSEG8EI8_V:
+ return getIndexSegmentLoadStoreOperandInfo(3, MI, MO);
+ case RISCV::VLUXSEG2EI16_V:
+ case RISCV::VLUXSEG3EI16_V:
+ case RISCV::VLUXSEG4EI16_V:
+ case RISCV::VLUXSEG5EI16_V:
+ case RISCV::VLUXSEG6EI16_V:
+ case RISCV::VLUXSEG7EI16_V:
+ case RISCV::VLUXSEG8EI16_V:
+ case RISCV::VLOXSEG2EI16_V:
+ case RISCV::VLOXSEG3EI16_V:
+ case RISCV::VLOXSEG4EI16_V:
+ case RISCV::VLOXSEG5EI16_V:
+ case RISCV::VLOXSEG6EI16_V:
+ case RISCV::VLOXSEG7EI16_V:
+ case RISCV::VLOXSEG8EI16_V:
+ case RISCV::VSUXSEG2EI16_V:
+ case RISCV::VSUXSEG3EI16_V:
+ case RISCV::VSUXSEG4EI16_V:
+ case RISCV::VSUXSEG5EI16_V:
+ case RISCV::VSUXSEG6EI16_V:
+ case RISCV::VSUXSEG7EI16_V:
+ case RISCV::VSUXSEG8EI16_V:
+ case RISCV::VSOXSEG2EI16_V:
+ case RISCV::VSOXSEG3EI16_V:
+ case RISCV::VSOXSEG4EI16_V:
+ case RISCV::VSOXSEG5EI16_V:
+ case RISCV::VSOXSEG6EI16_V:
+ case RISCV::VSOXSEG7EI16_V:
+ case RISCV::VSOXSEG8EI16_V:
+ return getIndexSegmentLoadStoreOperandInfo(4, MI, MO);
+ case RISCV::VLUXSEG2EI32_V:
+ case RISCV::VLUXSEG3EI32_V:
+ case RISCV::VLUXSEG4EI32_V:
+ case RISCV::VLUXSEG5EI32_V:
+ case RISCV::VLUXSEG6EI32_V:
+ case RISCV::VLUXSEG7EI32_V:
+ case RISCV::VLUXSEG8EI32_V:
+ case RISCV::VLOXSEG2EI32_V:
+ case RISCV::VLOXSEG3EI32_V:
+ case RISCV::VLOXSEG4EI32_V:
+ case RISCV::VLOXSEG5EI32_V:
+ case RISCV::VLOXSEG6EI32_V:
+ case RISCV::VLOXSEG7EI32_V:
+ case RISCV::VLOXSEG8EI32_V:
+ case RISCV::VSUXSEG2EI32_V:
+ case RISCV::VSUXSEG3EI32_V:
+ case RISCV::VSUXSEG4EI32_V:
+ case RISCV::VSUXSEG5EI32_V:
+ case RISCV::VSUXSEG6EI32_V:
+ case RISCV::VSUXSEG7EI32_V:
+ case RISCV::VSUXSEG8EI32_V:
+ case RISCV::VSOXSEG2EI32_V:
+ case RISCV::VSOXSEG3EI32_V:
+ case RISCV::VSOXSEG4EI32_V:
+ case RISCV::VSOXSEG5EI32_V:
+ case RISCV::VSOXSEG6EI32_V:
+ case RISCV::VSOXSEG7EI32_V:
+ case RISCV::VSOXSEG8EI32_V:
+ return getIndexSegmentLoadStoreOperandInfo(5, MI, MO);
+ case RISCV::VLUXSEG2EI64_V:
+ case RISCV::VLUXSEG3EI64_V:
+ case RISCV::VLUXSEG4EI64_V:
+ case RISCV::VLUXSEG5EI64_V:
+ case RISCV::VLUXSEG6EI64_V:
+ case RISCV::VLUXSEG7EI64_V:
+ case RISCV::VLUXSEG8EI64_V:
+ case RISCV::VLOXSEG2EI64_V:
+ case RISCV::VLOXSEG3EI64_V:
+ case RISCV::VLOXSEG4EI64_V:
+ case RISCV::VLOXSEG5EI64_V:
+ case RISCV::VLOXSEG6EI64_V:
+ case RISCV::VLOXSEG7EI64_V:
+ case RISCV::VLOXSEG8EI64_V:
+ case RISCV::VSUXSEG2EI64_V:
+ case RISCV::VSUXSEG3EI64_V:
+ case RISCV::VSUXSEG4EI64_V:
+ case RISCV::VSUXSEG5EI64_V:
+ case RISCV::VSUXSEG6EI64_V:
+ case RISCV::VSUXSEG7EI64_V:
+ case RISCV::VSUXSEG8EI64_V:
+ case RISCV::VSOXSEG2EI64_V:
+ case RISCV::VSOXSEG3EI64_V:
+ case RISCV::VSOXSEG4EI64_V:
+ case RISCV::VSOXSEG5EI64_V:
+ case RISCV::VSOXSEG6EI64_V:
+ case RISCV::VSOXSEG7EI64_V:
+ case RISCV::VSOXSEG8EI64_V:
+ return getIndexSegmentLoadStoreOperandInfo(6, MI, MO);
+
+ // 7.9. Vector Load/Store Whole Register Instructions
+ // EMUL=nr. EEW=eew. Since in-register byte layouts are identical to
+ // in-memory byte layouts, the same data is written to the destination
+ // register regardless of EEW. eew is just a hint to the hardware and has no
+ // functional impact. Therefore, it is okay to ignore eew and always use the
+ // same EEW to create more optimization opportunities.
+ // FIXME: Instead of using any SEW, we really ought to return the SEW in the
+ // instruction and add a field to OperandInfo that says the SEW is just a
+ // hint, so that this optimization can use any SEW to construct a ratio.
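+ // For example, vl2re32.v always occupies two vector registers (EMUL=2)
+ // regardless of the vtype currently in effect.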
+ case RISCV::VL1RE8_V:
+ case RISCV::VL1RE16_V:
+ case RISCV::VL1RE32_V:
+ case RISCV::VL1RE64_V:
+ case RISCV::VS1R_V:
+ return OperandInfo(RISCVII::VLMUL::LMUL_1, 0);
+ case RISCV::VL2RE8_V:
+ case RISCV::VL2RE16_V:
+ case RISCV::VL2RE32_V:
+ case RISCV::VL2RE64_V:
+ case RISCV::VS2R_V:
+ return OperandInfo(RISCVII::VLMUL::LMUL_2, 0);
+ case RISCV::VL4RE8_V:
+ case RISCV::VL4RE16_V:
+ case RISCV::VL4RE32_V:
+ case RISCV::VL4RE64_V:
+ case RISCV::VS4R_V:
+ return OperandInfo(RISCVII::VLMUL::LMUL_4, 0);
+ case RISCV::VL8RE8_V:
+ case RISCV::VL8RE16_V:
+ case RISCV::VL8RE32_V:
+ case RISCV::VL8RE64_V:
+ case RISCV::VS8R_V:
+ return OperandInfo(RISCVII::VLMUL::LMUL_8, 0);
+
+ // 11. Vector Integer Arithmetic Instructions
+ // 11.1. Vector Single-Width Integer Add and Subtract
+ case RISCV::VADD_VI:
+ case RISCV::VADD_VV:
+ case RISCV::VADD_VX:
+ case RISCV::VSUB_VV:
+ case RISCV::VSUB_VX:
+ case RISCV::VRSUB_VI:
+ case RISCV::VRSUB_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.2. Vector Widening Integer Add/Subtract
+ // Def uses EEW=2*SEW and EMUL=2*LMUL. Operands use EEW=SEW and EMUL=LMUL.
+ case RISCV::VWADDU_VV:
+ case RISCV::VWADDU_VX:
+ case RISCV::VWSUBU_VV:
+ case RISCV::VWSUBU_VX:
+ case RISCV::VWADD_VV:
+ case RISCV::VWADD_VX:
+ case RISCV::VWSUB_VV:
+ case RISCV::VWSUB_VX: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // Def and Op1 uses EEW=2*SEW and EMUL=2*LMUL. Op2 uses EEW=SEW and EMUL=LMUL
+ case RISCV::VWADDU_WV:
+ case RISCV::VWADDU_WX:
+ case RISCV::VWSUBU_WV:
+ case RISCV::VWSUBU_WX:
+ case RISCV::VWADD_WV:
+ case RISCV::VWADD_WX:
+ case RISCV::VWSUB_WV:
+ case RISCV::VWSUB_WX: {
+ bool TwoTimes = IsMODef || MO.getOperandNo() == 1;
+ unsigned Log2EEW = TwoTimes ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = TwoTimes ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 11.3. Vector Integer Extension
+ case RISCV::VZEXT_VF2:
+ case RISCV::VSEXT_VF2:
+ return getIntegerExtensionOperandInfo(2, MI, MO);
+ case RISCV::VZEXT_VF4:
+ case RISCV::VSEXT_VF4:
+ return getIntegerExtensionOperandInfo(4, MI, MO);
+ case RISCV::VZEXT_VF8:
+ case RISCV::VSEXT_VF8:
+ return getIntegerExtensionOperandInfo(8, MI, MO);
+
+ // 11.4. Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions
+ // EEW=SEW and EMUL=LMUL. Mask Operand EEW=1 and EMUL=(EEW/SEW)*LMUL
+ case RISCV::VADC_VIM:
+ case RISCV::VADC_VVM:
+ case RISCV::VADC_VXM:
+ case RISCV::VSBC_VVM:
+ case RISCV::VSBC_VXM:
+ return MO.getOperandNo() == 3
+ ? OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0)
+ : OperandInfo(MIVLMul, MILog2SEW);
+ // Dest EEW=1 and EMUL=(EEW/SEW)*LMUL. Source EEW=SEW and EMUL=LMUL. Mask
+ // operand EEW=1 and EMUL=(EEW/SEW)*LMUL
+ case RISCV::VMADC_VIM:
+ case RISCV::VMADC_VVM:
+ case RISCV::VMADC_VXM:
+ case RISCV::VMSBC_VVM:
+ case RISCV::VMSBC_VXM:
+ return IsMODef || MO.getOperandNo() == 3
+ ? OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0)
+ : OperandInfo(MIVLMul, MILog2SEW);
+ // Dest EEW=1 and EMUL=(EEW/SEW)*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VMADC_VV:
+ case RISCV::VMADC_VI:
+ case RISCV::VMADC_VX:
+ case RISCV::VMSBC_VV:
+ case RISCV::VMSBC_VX:
+ return IsMODef ? OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0)
+ : OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.5. Vector Bitwise Logical Instructions
+ // 11.6. Vector Single-Width Shift Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VAND_VI:
+ case RISCV::VAND_VV:
+ case RISCV::VAND_VX:
+ case RISCV::VOR_VI:
+ case RISCV::VOR_VV:
+ case RISCV::VOR_VX:
+ case RISCV::VXOR_VI:
+ case RISCV::VXOR_VV:
+ case RISCV::VXOR_VX:
+ case RISCV::VSLL_VI:
+ case RISCV::VSLL_VV:
+ case RISCV::VSLL_VX:
+ case RISCV::VSRL_VI:
+ case RISCV::VSRL_VV:
+ case RISCV::VSRL_VX:
+ case RISCV::VSRA_VI:
+ case RISCV::VSRA_VV:
+ case RISCV::VSRA_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.7. Vector Narrowing Integer Right Shift Instructions
+ // Destination EEW=SEW and EMUL=LMUL, Op 1 has EEW=2*SEW EMUL=2*LMUL. Op2 has
+ // EEW=SEW EMUL=LMUL.
+ case RISCV::VNSRL_WX:
+ case RISCV::VNSRL_WI:
+ case RISCV::VNSRL_WV:
+ case RISCV::VNSRA_WI:
+ case RISCV::VNSRA_WV:
+ case RISCV::VNSRA_WX: {
+ bool TwoTimes = MO.getOperandNo() == 1;
+ unsigned Log2EEW = TwoTimes ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = TwoTimes ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 11.8. Vector Integer Compare Instructions
+ // Dest EEW=1 and EMUL=(EEW/SEW)*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VMSEQ_VI:
+ case RISCV::VMSEQ_VV:
+ case RISCV::VMSEQ_VX:
+ case RISCV::VMSNE_VI:
+ case RISCV::VMSNE_VV:
+ case RISCV::VMSNE_VX:
+ case RISCV::VMSLTU_VV:
+ case RISCV::VMSLTU_VX:
+ case RISCV::VMSLT_VV:
+ case RISCV::VMSLT_VX:
+ case RISCV::VMSLEU_VV:
+ case RISCV::VMSLEU_VI:
+ case RISCV::VMSLEU_VX:
+ case RISCV::VMSLE_VV:
+ case RISCV::VMSLE_VI:
+ case RISCV::VMSLE_VX:
+ case RISCV::VMSGTU_VI:
+ case RISCV::VMSGTU_VX:
+ case RISCV::VMSGT_VI:
+ case RISCV::VMSGT_VX:
+ if (IsMODef)
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.9. Vector Integer Min/Max Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VMINU_VV:
+ case RISCV::VMINU_VX:
+ case RISCV::VMIN_VV:
+ case RISCV::VMIN_VX:
+ case RISCV::VMAXU_VV:
+ case RISCV::VMAXU_VX:
+ case RISCV::VMAX_VV:
+ case RISCV::VMAX_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.10. Vector Single-Width Integer Multiply Instructions
+ // Source and Dest EEW=SEW and EMUL=LMUL.
+ case RISCV::VMUL_VV:
+ case RISCV::VMUL_VX:
+ case RISCV::VMULH_VV:
+ case RISCV::VMULH_VX:
+ case RISCV::VMULHU_VV:
+ case RISCV::VMULHU_VX:
+ case RISCV::VMULHSU_VV:
+ case RISCV::VMULHSU_VX: {
+ return OperandInfo(MIVLMul, MILog2SEW);
+ }
+ // 11.11. Vector Integer Divide Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VDIVU_VV:
+ case RISCV::VDIVU_VX:
+ case RISCV::VDIV_VV:
+ case RISCV::VDIV_VX:
+ case RISCV::VREMU_VV:
+ case RISCV::VREMU_VX:
+ case RISCV::VREM_VV:
+ case RISCV::VREM_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.12. Vector Widening Integer Multiply Instructions
+ // Source and Destination EMUL=LMUL. Destination EEW=2*SEW. Source EEW=SEW.
+ case RISCV::VWMUL_VV:
+ case RISCV::VWMUL_VX:
+ case RISCV::VWMULSU_VV:
+ case RISCV::VWMULSU_VX:
+ case RISCV::VWMULU_VV:
+ case RISCV::VWMULU_VX: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 11.13. Vector Single-Width Integer Multiply-Add Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VMACC_VV:
+ case RISCV::VMACC_VX:
+ case RISCV::VNMSAC_VV:
+ case RISCV::VNMSAC_VX:
+ case RISCV::VMADD_VV:
+ case RISCV::VMADD_VX:
+ case RISCV::VNMSUB_VV:
+ case RISCV::VNMSUB_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.14. Vector Widening Integer Multiply-Add Instructions
+ // Destination EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW and EMUL=LMUL.
+ // Even though the add is a 2*SEW addition, the operands of the add are the
+ // Dest which is 2*SEW and the result of the multiply which is 2*SEW.
+ case RISCV::VWMACCU_VV:
+ case RISCV::VWMACCU_VX:
+ case RISCV::VWMACC_VV:
+ case RISCV::VWMACC_VX:
+ case RISCV::VWMACCSU_VV:
+ case RISCV::VWMACCSU_VX:
+ case RISCV::VWMACCUS_VX: {
+ // Operand 0 is destination as a def and Operand 1 is destination as a use
+ // due to SSA.
+ bool IsMODest = MO.getOperandNo() == 0 || MO.getOperandNo() == 1;
+ unsigned Log2EEW = IsMODest ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODest ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 11.15. Vector Integer Merge Instructions
+ // EEW=SEW and EMUL=LMUL, except the mask operand has EEW=1 and EMUL=
+ // (EEW/SEW)*LMUL.
+ case RISCV::VMERGE_VIM:
+ case RISCV::VMERGE_VVM:
+ case RISCV::VMERGE_VXM:
+ if (MO.getOperandNo() == 3)
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 11.16. Vector Integer Move Instructions
+ // 12. Vector Fixed-Point Arithmetic Instructions
+ // 12.1. Vector Single-Width Saturating Add and Subtract
+ // 12.2. Vector Single-Width Averaging Add and Subtract
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VMV_V_I:
+ case RISCV::VMV_V_V:
+ case RISCV::VMV_V_X:
+ case RISCV::VSADDU_VI:
+ case RISCV::VSADDU_VV:
+ case RISCV::VSADDU_VX:
+ case RISCV::VSADD_VI:
+ case RISCV::VSADD_VV:
+ case RISCV::VSADD_VX:
+ case RISCV::VSSUBU_VV:
+ case RISCV::VSSUBU_VX:
+ case RISCV::VSSUB_VV:
+ case RISCV::VSSUB_VX:
+ case RISCV::VAADDU_VV:
+ case RISCV::VAADDU_VX:
+ case RISCV::VAADD_VV:
+ case RISCV::VAADD_VX:
+ case RISCV::VASUBU_VV:
+ case RISCV::VASUBU_VX:
+ case RISCV::VASUB_VV:
+ case RISCV::VASUB_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+ // Destination EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VSMUL_VV:
+ case RISCV::VSMUL_VX: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 12.4. Vector Single-Width Scaling Shift Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VSSRL_VI:
+ case RISCV::VSSRL_VV:
+ case RISCV::VSSRL_VX:
+ case RISCV::VSSRA_VI:
+ case RISCV::VSSRA_VV:
+ case RISCV::VSSRA_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 12.5. Vector Narrowing Fixed-Point Clip Instructions
+ // Destination and Op1 EEW=SEW and EMUL=LMUL. Op2 EEW=2*SEW and EMUL=2*LMUL
+ case RISCV::VNCLIPU_WI:
+ case RISCV::VNCLIPU_WV:
+ case RISCV::VNCLIPU_WX:
+ case RISCV::VNCLIP_WI:
+ case RISCV::VNCLIP_WV:
+ case RISCV::VNCLIP_WX: {
+ bool TwoTimes = !IsMODef && MO.getOperandNo() == 1;
+ unsigned Log2EEW = TwoTimes ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = TwoTimes ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 13. Vector Floating-Point Instructions
+ // 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VFADD_VF:
+ case RISCV::VFADD_VV:
+ case RISCV::VFSUB_VF:
+ case RISCV::VFSUB_VV:
+ case RISCV::VFRSUB_VF:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+ // Dest EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VFWADD_VV:
+ case RISCV::VFWADD_VF:
+ case RISCV::VFWSUB_VV:
+ case RISCV::VFWSUB_VF: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // Dest and Op1 EEW=2*SEW and EMUL=2*LMUL. Op2 EEW=SEW and EMUL=LMUL.
+ case RISCV::VFWADD_WF:
+ case RISCV::VFWADD_WV:
+ case RISCV::VFWSUB_WF:
+ case RISCV::VFWSUB_WV: {
+ bool TwoTimes = IsMODef || MO.getOperandNo() == 1;
+ unsigned Log2EEW = TwoTimes ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = TwoTimes ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VFMUL_VF:
+ case RISCV::VFMUL_VV:
+ case RISCV::VFDIV_VF:
+ case RISCV::VFDIV_VV:
+ case RISCV::VFRDIV_VF:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 13.5. Vector Widening Floating-Point Multiply
+ // Dest EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VFWMUL_VF:
+ case RISCV::VFWMUL_VV: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+
+ // 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+ // EEW=SEW. EMUL=LMUL.
+ // TODO: FMA instructions read 3 registers, but the MC layer only models 2
+ // source registers because it does not capture that the output operand is
+ // also part of the input operand list.
+ case RISCV::VFMACC_VF:
+ case RISCV::VFMACC_VV:
+ case RISCV::VFNMACC_VF:
+ case RISCV::VFNMACC_VV:
+ case RISCV::VFMSAC_VF:
+ case RISCV::VFMSAC_VV:
+ case RISCV::VFNMSAC_VF:
+ case RISCV::VFNMSAC_VV:
+ case RISCV::VFMADD_VF:
+ case RISCV::VFMADD_VV:
+ case RISCV::VFNMADD_VF:
+ case RISCV::VFNMADD_VV:
+ case RISCV::VFMSUB_VF:
+ case RISCV::VFMSUB_VV:
+ case RISCV::VFNMSUB_VF:
+ case RISCV::VFNMSUB_VV:
+ return OperandInfo(OperandInfo::State::Unknown);
+
+ // 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+ // Dest EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW EMUL=LMUL.
+ case RISCV::VFWMACC_VF:
+ case RISCV::VFWMACC_VV:
+ case RISCV::VFWNMACC_VF:
+ case RISCV::VFWNMACC_VV:
+ case RISCV::VFWMSAC_VF:
+ case RISCV::VFWMSAC_VV:
+ case RISCV::VFWNMSAC_VF:
+ case RISCV::VFWNMSAC_VV: {
+ // Operand 0 is destination as a def and Operand 1 is destination as a use
+ // due to SSA.
+ bool IsMODest = MO.getOperandNo() == 0 || MO.getOperandNo() == 1;
+ unsigned Log2EEW = IsMODest ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODest ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 13.8. Vector Floating-Point Square-Root Instruction
+ // 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+ // 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+ // 13.11. Vector Floating-Point MIN/MAX Instructions
+ // 13.12. Vector Floating-Point Sign-Injection Instructions
+ // 13.14. Vector Floating-Point Classify Instruction
+ // 13.16. Vector Floating-Point Move Instruction
+ // 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+ // EEW=SEW. EMUL=LMUL.
+ case RISCV::VFSQRT_V:
+ case RISCV::VFRSQRT7_V:
+ case RISCV::VFREC7_V:
+ case RISCV::VFMIN_VF:
+ case RISCV::VFMIN_VV:
+ case RISCV::VFMAX_VF:
+ case RISCV::VFMAX_VV:
+ case RISCV::VFSGNJ_VF:
+ case RISCV::VFSGNJ_VV:
+ case RISCV::VFSGNJN_VV:
+ case RISCV::VFSGNJN_VF:
+ case RISCV::VFSGNJX_VF:
+ case RISCV::VFSGNJX_VV:
+ case RISCV::VFCLASS_V:
+ case RISCV::VFMV_V_F:
+ case RISCV::VFCVT_XU_F_V:
+ case RISCV::VFCVT_X_F_V:
+ case RISCV::VFCVT_RTZ_XU_F_V:
+ case RISCV::VFCVT_RTZ_X_F_V:
+ case RISCV::VFCVT_F_XU_V:
+ case RISCV::VFCVT_F_X_V:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 13.13. Vector Floating-Point Compare Instructions
+ // Dest EEW=1 and EMUL=(EEW/SEW)*LMUL. Source EEW=SEW EMUL=LMUL.
+ case RISCV::VMFEQ_VF:
+ case RISCV::VMFEQ_VV:
+ case RISCV::VMFNE_VF:
+ case RISCV::VMFNE_VV:
+ case RISCV::VMFLT_VF:
+ case RISCV::VMFLT_VV:
+ case RISCV::VMFLE_VF:
+ case RISCV::VMFLE_VV:
+ case RISCV::VMFGT_VF:
+ case RISCV::VMFGE_VF:
+ if (IsMODef)
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 13.15. Vector Floating-Point Merge Instruction
+ // EEW=SEW and EMUL=LMUL, except the mask operand has EEW=1 and EMUL=
+ // (EEW/SEW)*LMUL.
+ case RISCV::VFMERGE_VFM:
+ if (MO.getOperandNo() == 3)
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+ // Dest EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW and EMUL=LMUL.
+ case RISCV::VFWCVT_XU_F_V:
+ case RISCV::VFWCVT_X_F_V:
+ case RISCV::VFWCVT_RTZ_XU_F_V:
+ case RISCV::VFWCVT_RTZ_X_F_V:
+ case RISCV::VFWCVT_F_XU_V:
+ case RISCV::VFWCVT_F_X_V:
+ case RISCV::VFWCVT_F_F_V: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+ // EMUL=LMUL. Dest EEW=SEW/2. Source EEW=SEW EMUL=LMUL.
+ case RISCV::VFNCVT_XU_F_W:
+ case RISCV::VFNCVT_X_F_W:
+ case RISCV::VFNCVT_RTZ_XU_F_W:
+ case RISCV::VFNCVT_RTZ_X_F_W:
+ case RISCV::VFNCVT_F_XU_W:
+ case RISCV::VFNCVT_F_X_W:
+ case RISCV::VFNCVT_F_F_W:
+ case RISCV::VFNCVT_ROD_F_F_W: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW - 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? halfVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 14. Vector Reduction Operations
+ // 14.1. Vector Single-Width Integer Reduction Instructions
+ // We need to return Unknown since only element 0 of reduction is valid but it
+ // was generated by reducing over all of the input elements. There are 3
+ // vector sources for reductions. One for scalar, one for tail value, and one
+ // for the elements to reduce over. Only the one with the elements to reduce
+ // over obeys VL. The other two only read element 0 from the register.
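+ // For example, vredsum.vs vd, vs2, vs1 writes sum(vs1[0], vs2[0..vl-1]) to
+ // vd[0]; only the vs2 operand is read under VL.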
+ case RISCV::VREDAND_VS:
+ case RISCV::VREDMAX_VS:
+ case RISCV::VREDMAXU_VS:
+ case RISCV::VREDMIN_VS:
+ case RISCV::VREDMINU_VS:
+ case RISCV::VREDOR_VS:
+ case RISCV::VREDSUM_VS:
+ case RISCV::VREDXOR_VS:
+ return OperandInfo(OperandInfo::State::Unknown);
+
+ // 14.2. Vector Widening Integer Reduction Instructions
+ // Dest EEW=2*SEW and EMUL=2*LMUL. Source EEW=SEW EMUL=LMUL. Source is zero
+ // extended to 2*SEW in order to generate 2*SEW Dest.
+ case RISCV::VWREDSUM_VS:
+ case RISCV::VWREDSUMU_VS: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+ // 14.3. Vector Single-Width Floating-Point Reduction Instructions
+ // EMUL=LMUL. EEW=SEW.
+ case RISCV::VFREDMAX_VS:
+ case RISCV::VFREDMIN_VS:
+ case RISCV::VFREDOSUM_VS:
+ case RISCV::VFREDUSUM_VS:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 14.4. Vector Widening Floating-Point Reduction Instructions
+ // Source EEW=SEW and EMUL=LMUL. Dest EEW=2*SEW and EMUL=2*LMUL.
+ case RISCV::VFWREDOSUM_VS:
+ case RISCV::VFWREDUSUM_VS: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW + 1 : MILog2SEW;
+ RISCVII::VLMUL EMUL = IsMODef ? twoTimesVLMUL(MIVLMul) : MIVLMul;
+ return OperandInfo(EMUL, Log2EEW);
+ }
+
+ // 15. Vector Mask Instructions
+ // 15.2. Vector count population in mask vcpop.m
+ // 15.3. vfirst find-first-set mask bit
+ // 15.4. vmsbf.m set-before-first mask bit
+ // 15.6. vmsof.m set-only-first mask bit
+ // EEW=1 and EMUL= (EEW/SEW)*LMUL
+ case RISCV::VMAND_MM:
+ case RISCV::VMNAND_MM:
+ case RISCV::VMANDN_MM:
+ case RISCV::VMXOR_MM:
+ case RISCV::VMOR_MM:
+ case RISCV::VMNOR_MM:
+ case RISCV::VMORN_MM:
+ case RISCV::VMXNOR_MM:
+ case RISCV::VCPOP_M:
+ case RISCV::VFIRST_M:
+ case RISCV::VMSBF_M:
+ case RISCV::VMSIF_M:
+ case RISCV::VMSOF_M: {
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+ }
+ // 15.8. Vector Iota Instruction
+ // Dest and Op1 EEW=SEW and EMUL=LMUL. Op2 EEW=1 and EMUL=(EEW/SEW)*LMUL.
+ case RISCV::VIOTA_M: {
+ bool IsDefOrOp1 = IsMODef || MO.getOperandNo() == 1;
+ unsigned Log2EEW = IsDefOrOp1 ? 0 : MILog2SEW;
+ if (IsDefOrOp1)
+ return OperandInfo(MIVLMul, Log2EEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(MILog2SEW, MI), Log2EEW);
+ }
+ // 15.9. Vector Element Index Instruction
+ // Dest EEW=SEW EMUL=LMUL. Mask Operand EEW=1 and EMUL=(EEW/SEW)*LMUL.
+ case RISCV::VID_V: {
+ unsigned Log2EEW = IsMODef ? MILog2SEW : 0;
+ if (IsMODef)
+ return OperandInfo(MIVLMul, Log2EEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(Log2EEW, MI), Log2EEW);
+ }
+ // 16. Vector Permutation Instructions
+ // 16.1. Integer Scalar Move Instructions
+ // 16.2. Floating-Point Scalar Move Instructions
+ // EMUL=LMUL. EEW=SEW.
+ case RISCV::VMV_X_S:
+ case RISCV::VMV_S_X:
+ case RISCV::VFMV_F_S:
+ case RISCV::VFMV_S_F:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 16.3. Vector Slide Instructions
+ // EMUL=LMUL. EEW=SEW.
+ case RISCV::VSLIDEUP_VI:
+ case RISCV::VSLIDEUP_VX:
+ case RISCV::VSLIDEDOWN_VI:
+ case RISCV::VSLIDEDOWN_VX:
+ case RISCV::VSLIDE1UP_VX:
+ case RISCV::VFSLIDE1UP_VF:
+ case RISCV::VSLIDE1DOWN_VX:
+ case RISCV::VFSLIDE1DOWN_VF:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 16.4. Vector Register Gather Instructions
+ // EMUL=LMUL. EEW=SEW. For mask operand, EMUL=1 and EEW=1.
+ case RISCV::VRGATHER_VI:
+ case RISCV::VRGATHER_VV:
+ case RISCV::VRGATHER_VX:
+ return OperandInfo(MIVLMul, MILog2SEW);
+ // Destination EMUL=LMUL and EEW=SEW. Op2 EEW=SEW and EMUL=LMUL. Op1 EEW=16
+ // and EMUL=(16/SEW)*LMUL.
+ case RISCV::VRGATHEREI16_VV: {
+ if (IsMODef || MO.getOperandNo() == 2)
+ return OperandInfo(MIVLMul, MILog2SEW);
+ return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(4, MI), 4);
+ }
+ // 16.5. Vector Compress Instruction
+ // EMUL=LMUL. EEW=SEW.
+ case RISCV::VCOMPRESS_VM:
+ return OperandInfo(MIVLMul, MILog2SEW);
+
+ // 16.6. Whole Vector Register Move
+ case RISCV::VMV1R_V:
+ case RISCV::VMV2R_V:
+ case RISCV::VMV4R_V:
+ case RISCV::VMV8R_V:
+ llvm_unreachable("These instructions don't have pseudo versions so they "
+ "don't have an SEW operand.");
+
+ default:
+ return OperandInfo(OperandInfo::State::Unknown);
+ }
+}
+
+/// Return true if this optimization should consider MI for VL reduction. This
+/// white-list approach simplifies the optimization by excluding instructions
+/// that may have more complex semantics with respect to how they use VL.
+static bool isSupportedInstr(const MachineInstr &MI) {
+ const RISCVVPseudosTable::PseudoInfo *RVV =
+ RISCVVPseudosTable::getPseudoInfo(MI.getOpcode());
+
+ if (!RVV)
+ return false;
+
+ switch (RVV->BaseInstr) {
+ case RISCV::VADD_VI:
+ case RISCV::VADD_VV:
+ case RISCV::VADD_VX:
+ case RISCV::VMUL_VV:
+ case RISCV::VMUL_VX:
+ case RISCV::VSLL_VI:
+ case RISCV::VSEXT_VF2:
+ case RISCV::VSEXT_VF4:
+ case RISCV::VSEXT_VF8:
+ case RISCV::VZEXT_VF2:
+ case RISCV::VZEXT_VF4:
+ case RISCV::VZEXT_VF8:
+ case RISCV::VMV_V_I:
+ case RISCV::VMV_V_X:
+ return true;
+ }
+
+ return false;
+}
+
+/// Return true if MO is a vector operand but is used as a scalar operand.
+static bool isVectorOpUsedAsScalarOp(MachineOperand &MO) {
+ MachineInstr *MI = MO.getParent();
+ const RISCVVPseudosTable::PseudoInfo *RVV =
+ RISCVVPseudosTable::getPseudoInfo(MI->getOpcode());
+
+ if (!RVV)
+ return false;
+
+ switch (RVV->BaseInstr) {
+ // Reductions only use vs1[0] of vs1
+ case RISCV::VREDAND_VS:
+ case RISCV::VREDMAX_VS:
+ case RISCV::VREDMAXU_VS:
+ case RISCV::VREDMIN_VS:
+ case RISCV::VREDMINU_VS:
+ case RISCV::VREDOR_VS:
+ case RISCV::VREDSUM_VS:
+ case RISCV::VREDXOR_VS:
+ case RISCV::VWREDSUM_VS:
+ case RISCV::VWREDSUMU_VS:
+ case RISCV::VFREDMAX_VS:
+ case RISCV::VFREDMIN_VS:
+ case RISCV::VFREDOSUM_VS:
+ case RISCV::VFREDUSUM_VS:
+ case RISCV::VFWREDOSUM_VS:
+ case RISCV::VFWREDUSUM_VS: {
+ return MO.getOperandNo() == 1;
+ }
+ default:
+ return false;
+ }
+}
+
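+// Try to reduce the VL of OrigMI and, transitively, of the supported
+// instructions that feed it. Roughly, the VL of a producer can be reduced to a
+// common VL register used by all consumers of its result, provided that VL's
+// definition dominates the producer and the EEW/EMUL of the produced value
+// matches what each consumer reads.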
+bool RISCVVLOptimizer::tryReduceVL(MachineInstr &OrigMI) {
+ SetVector<MachineInstr *> Worklist;
+ Worklist.insert(&OrigMI);
+
+ bool MadeChange = false;
+ while (!Worklist.empty()) {
+ MachineInstr &MI = *Worklist.pop_back_val();
+
+ std::optional<Register> CommonVL;
+ bool CanReduceVL = true;
+ for (auto &UserOp : MRI->use_operands(MI.getOperand(0).getReg())) {
+ const MachineInstr &UserMI = *UserOp.getParent();
+
+ // Instructions like reductions may use a vector register as a scalar
+ // register. In this case, we should treat it like a scalar register which
+ // does not impact the decision on whether to optimize VL.
+ if (isVectorOpUsedAsScalarOp(UserOp))
+ continue;
+
+ // Tied operands might pass through.
+ if (UserOp.isTied()) {
+ CanReduceVL = false;
+ break;
+ }
+
+ const MCInstrDesc &Desc = UserMI.getDesc();
+ if (!RISCVII::hasVLOp(Desc.TSFlags) || !RISCVII::hasSEWOp(Desc.TSFlags)) {
+ CanReduceVL = false;
+ break;
+ }
+
+ unsigned VLOpNum = RISCVII::getVLOpNum(Desc);
+ const MachineOperand &VLOp = UserMI.getOperand(VLOpNum);
+ // Looking for a register VL that isn't X0.
+ if (!VLOp.isReg() || VLOp.getReg() == RISCV::X0) {
+ CanReduceVL = false;
+ break;
+ }
+
+ if (!CommonVL) {
+ CommonVL = VLOp.getReg();
+ } else if (*CommonVL != VLOp.getReg()) {
+ CanReduceVL = false;
+ break;
+ }
+
+ // The SEW and LMUL of destination and source registers need to match.
+
+ // If the produced Dest is not a vector register, then it has no EEW or
+ // EMUL, so there is no need to check that producer and consumer LMUL and
+ // SEW match. We've already checked above that UserOp is a vector
+ // register.
+ if (!isVectorRegClass(MI.getOperand(0).getReg(), MRI))
+ continue;
+
+ OperandInfo ConsumerInfo = getOperandInfo(UserMI, UserOp, MRI);
+ OperandInfo ProducerInfo = getOperandInfo(MI, MI.getOperand(0), MRI);
+ if (ConsumerInfo.isUnknown() || ProducerInfo.isUnknown() ||
+ !OperandInfo::EMULAndEEWAreEqual(ConsumerInfo, ProducerInfo)) {
+ CanReduceVL = false;
+ break;
+ }
+ }
+
+ if (!CanReduceVL || !CommonVL)
+ continue;
+
+ if (!CommonVL->isVirtual())
+ continue;
+
+ const MachineInstr *VLMI = MRI->getVRegDef(*CommonVL);
+ if (!MDT->dominates(VLMI, &MI))
+ continue;
+
+ // All our checks passed. We can reduce VL.
+ unsigned VLOpNum = RISCVII::getVLOpNum(MI.getDesc());
+ MachineOperand &VLOp = MI.getOperand(VLOpNum);
+ VLOp.ChangeToRegister(*CommonVL, false);
+ MadeChange = true;
+
+ // Now add all inputs to this instruction to the worklist.
+ for (auto &Op : MI.operands()) {
+ if (!Op.isReg() || !Op.isUse() || !Op.getReg().isVirtual())
+ continue;
+
+ if (!isVectorRegClass(Op.getReg(), MRI))
+ continue;
+
+ MachineInstr *DefMI = MRI->getVRegDef(Op.getReg());
+
+ const MCInstrDesc &Desc = DefMI->getDesc();
+ if (!RISCVII::hasVLOp(Desc.TSFlags) || !RISCVII::hasSEWOp(Desc.TSFlags))
+ continue;
+
+ unsigned VLOpNum = RISCVII::getVLOpNum(Desc);
+ const MachineOperand &VLOp = DefMI->getOperand(VLOpNum);
+ if (!VLOp.isImm() || VLOp.getImm() != RISCV::VLMaxSentinel)
+ continue;
+
+ if (DefMI->getNumDefs() != 1)
+ continue;
+
+ if (!isSupportedInstr(*DefMI))
+ continue;
+
+ Worklist.insert(DefMI);
+ }
+ }
+
+ return MadeChange;
+}
+
+bool RISCVVLOptimizer::runOnMachineFunction(MachineFunction &MF) {
+ if (skipFunction(MF.getFunction()))
+ return false;
+
+ MRI = &MF.getRegInfo();
+ MDT = &getAnalysis<MachineDominatorTreeWrapperPass>().getDomTree();
+
+ const RISCVSubtarget &ST = MF.getSubtarget<RISCVSubtarget>();
+ if (!ST.hasVInstructions())
+ return false;
+
+ bool MadeChange = false;
+ for (MachineBasicBlock &MBB : MF) {
+ // Visit instructions in reverse order.
+ for (auto &MI : make_range(MBB.rbegin(), MBB.rend())) {
+ const MCInstrDesc &Desc = MI.getDesc();
+ if (!RISCVII::hasVLOp(Desc.TSFlags) || !RISCVII::hasSEWOp(Desc.TSFlags))
+ continue;
+
+ unsigned VLOpNum = RISCVII::getVLOpNum(Desc);
+ const MachineOperand &VLOp = MI.getOperand(VLOpNum);
+ if (!VLOp.isImm() || VLOp.getImm() != RISCV::VLMaxSentinel)
+ continue;
+
+ if (MI.getNumDefs() != 1)
+ continue;
+
+ // Only care about instructions that produce vectors.
+ const MachineOperand &Op0 = MI.getOperand(0);
+ if (!Op0.isReg() || !Op0.isDef() || !isVectorRegClass(Op0.getReg(), MRI))
+ continue;
+
+ // Some instructions that produce vectors have semantics that make it more
+ // difficult to determine whether the VL can be reduced. For example, some
+ // instructions, such as reductions, may write lanes past VL to a scalar
+ // register. Other instructions, such as some loads or stores, may write
+ // lower lanes using data from higher lanes. There may be other complex
+ // semantics not mentioned here that make it hard to determine whether
+ // the VL can be optimized. As a result, a white-list of supported
+ // instructions is used. Over time, more instructions can be supported
+ // upon careful examination of their semantics under the logic in this
+ // optimization.
+ // TODO: Use a better approach than a white-list, such as adding
+ // properties to instructions using something like TSFlags.
+ if (!isSupportedInstr(MI))
+ continue;
+
+ MadeChange |= tryReduceVL(MI);
+ }
+ }
+
+ return MadeChange;
+}
diff --git a/llvm/test/CodeGen/RISCV/O3-pipeline.ll b/llvm/test/CodeGen/RISCV/O3-pipeline.ll
index 5d14d14d216244..4d24d3333f0467 100644
--- a/llvm/test/CodeGen/RISCV/O3-pipeline.ll
+++ b/llvm/test/CodeGen/RISCV/O3-pipeline.ll
@@ -118,6 +118,8 @@
; RV64-NEXT: RISC-V Optimize W Instructions
; CHECK-NEXT: RISC-V Pre-RA pseudo instruction expansion pass
; CHECK-NEXT: RISC-V Merge Base Offset
+; CHECK-NEXT: MachineDominator Tree Construction
+; CHECK-NEXT: RISC-V VL Optimizer
; CHECK-NEXT: RISC-V Insert Read/Write CSR Pass
; CHECK-NEXT: RISC-V Insert Write VXRM Pass
; CHECK-NEXT: RISC-V Landing Pad Setup
@@ -128,7 +130,6 @@
; CHECK-NEXT: Live Variable Analysis
; CHECK-NEXT: Eliminate PHI nodes for register allocation
; CHECK-NEXT: Two-Address instruction pass
-; CHECK-NEXT: MachineDominator Tree Construction
; CHECK-NEXT: Slot index numbering
; CHECK-NEXT: Live Interval Analysis
; CHECK-NEXT: Register Coalescer
diff --git a/llvm/test/CodeGen/RISCV/rvv/narrow-shift-extend.ll b/llvm/test/CodeGen/RISCV/rvv/narrow-shift-extend.ll
index e47517abacb4d3..63212e7cd5e004 100644
--- a/llvm/test/CodeGen/RISCV/rvv/narrow-shift-extend.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/narrow-shift-extend.ll
@@ -10,10 +10,10 @@ declare <vscale x 4 x i32> @llvm.riscv.vloxei.nxv4i32.nxv4i64(
define <vscale x 4 x i32> @test_vloxei(ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vloxei:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -30,10 +30,10 @@ entry:
define <vscale x 4 x i32> @test_vloxei2(ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vloxei2:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 14
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -50,10 +50,10 @@ entry:
define <vscale x 4 x i32> @test_vloxei3(ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vloxei3:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 26
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -74,9 +74,8 @@ define <vscale x 4 x i32> @test_vloxei4(ptr %ptr, <vscale x 4 x i8> %offset, <vs
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -100,10 +99,10 @@ declare <vscale x 4 x i32> @llvm.riscv.vloxei.nxv4i32.nxv4i16(
define <vscale x 4 x i32> @test_vloxei5(ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vloxei5:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e16, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
; CHECK-NEXT: vzext.vf2 v9, v8
; CHECK-NEXT: vsll.vi v10, v9, 12
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei16.v v8, (a0), v10
; CHECK-NEXT: ret
entry:
@@ -123,10 +122,10 @@ define <vscale x 4 x i32> @test_vloxei6(ptr %ptr, <vscale x 4 x i7> %offset, i64
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli a3, zero, e8, mf2, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2
-; CHECK-NEXT: vsetvli zero, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -146,8 +145,9 @@ define <vscale x 4 x i32> @test_vloxei7(ptr %ptr, <vscale x 4 x i1> %offset, i64
; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
; CHECK-NEXT: vmv.v.i v8, 0
; CHECK-NEXT: vmerge.vim v8, v8, 1, v0
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vsll.vi v12, v8, 2
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -172,10 +172,10 @@ declare <vscale x 4 x i32> @llvm.riscv.vloxei.mask.nxv4i32.nxv4i64(
define <vscale x 4 x i32> @test_vloxei_mask(ptr %ptr, <vscale x 4 x i8> %offset, <vscale x 4 x i1> %m, i64 %vl) {
; CHECK-LABEL: test_vloxei_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vloxei64.v v8, (a0), v12, v0.t
; CHECK-NEXT: ret
entry:
@@ -199,10 +199,10 @@ declare <vscale x 4 x i32> @llvm.riscv.vluxei.nxv4i32.nxv4i64(
define <vscale x 4 x i32> @test_vluxei(ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vluxei:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vluxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -227,10 +227,10 @@ declare <vscale x 4 x i32> @llvm.riscv.vluxei.mask.nxv4i32.nxv4i64(
define <vscale x 4 x i32> @test_vluxei_mask(ptr %ptr, <vscale x 4 x i8> %offset, <vscale x 4 x i1> %m, i64 %vl) {
; CHECK-LABEL: test_vluxei_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v8
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vluxei64.v v8, (a0), v12, v0.t
; CHECK-NEXT: ret
entry:
@@ -254,10 +254,10 @@ declare void @llvm.riscv.vsoxei.nxv4i32.nxv4i64(
define void @test_vsoxei(<vscale x 4 x i32> %val, ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vsoxei:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v10
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vsoxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -281,10 +281,10 @@ declare void @llvm.riscv.vsoxei.mask.nxv4i32.nxv4i64(
define void @test_vsoxei_mask(<vscale x 4 x i32> %val, ptr %ptr, <vscale x 4 x i8> %offset, <vscale x 4 x i1> %m, i64 %vl) {
; CHECK-LABEL: test_vsoxei_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v10
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vsoxei64.v v8, (a0), v12, v0.t
; CHECK-NEXT: ret
entry:
@@ -308,10 +308,10 @@ declare void @llvm.riscv.vsuxei.nxv4i32.nxv4i64(
define void @test_vsuxei(<vscale x 4 x i32> %val, ptr %ptr, <vscale x 4 x i8> %offset, i64 %vl) {
; CHECK-LABEL: test_vsuxei:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v10
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vsuxei64.v v8, (a0), v12
; CHECK-NEXT: ret
entry:
@@ -335,10 +335,10 @@ declare void @llvm.riscv.vsuxei.mask.nxv4i32.nxv4i64(
define void @test_vsuxei_mask(<vscale x 4 x i32> %val, ptr %ptr, <vscale x 4 x i8> %offset, <vscale x 4 x i1> %m, i64 %vl) {
; CHECK-LABEL: test_vsuxei_mask:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e64, m4, ta, ma
; CHECK-NEXT: vzext.vf8 v12, v10
; CHECK-NEXT: vsll.vi v12, v12, 4
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vsuxei64.v v8, (a0), v12, v0.t
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
index eb8c58d2d37790..073a0aecb8f732 100644
--- a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
@@ -184,9 +184,8 @@ define <vscale x 1 x i1> @icmp_uge_vv_nxv1i8(<vscale x 1 x i8> %va, <vscale x 1
define <vscale x 1 x i1> @icmp_uge_vx_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_uge_vx_nxv1i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf8, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsleu.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i8> poison, i8 %b, i32 0
@@ -348,9 +347,8 @@ define <vscale x 1 x i1> @icmp_sge_vv_nxv1i8(<vscale x 1 x i8> %va, <vscale x 1
define <vscale x 1 x i1> @icmp_sge_vx_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sge_vx_nxv1i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf8, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i8> poison, i8 %b, i32 0
@@ -470,9 +468,8 @@ define <vscale x 1 x i1> @icmp_sle_vx_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vsca
define <vscale x 1 x i1> @icmp_sle_vx_swap_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sle_vx_swap_nxv1i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf8, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i8> poison, i8 %b, i32 0
@@ -764,9 +761,8 @@ define <vscale x 8 x i1> @icmp_uge_vv_nxv8i8(<vscale x 8 x i8> %va, <vscale x 8
define <vscale x 8 x i1> @icmp_uge_vx_nxv8i8(<vscale x 8 x i8> %va, i8 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_uge_vx_nxv8i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsleu.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 8 x i8> poison, i8 %b, i32 0
@@ -928,9 +924,8 @@ define <vscale x 8 x i1> @icmp_sge_vv_nxv8i8(<vscale x 8 x i8> %va, <vscale x 8
define <vscale x 8 x i1> @icmp_sge_vx_nxv8i8(<vscale x 8 x i8> %va, i8 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sge_vx_nxv8i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 8 x i8> poison, i8 %b, i32 0
@@ -1050,9 +1045,8 @@ define <vscale x 8 x i1> @icmp_sle_vx_nxv8i8(<vscale x 8 x i8> %va, i8 %b, <vsca
define <vscale x 8 x i1> @icmp_sle_vx_swap_nxv8i8(<vscale x 8 x i8> %va, i8 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sle_vx_swap_nxv8i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 8 x i8> poison, i8 %b, i32 0
@@ -1375,9 +1369,8 @@ define <vscale x 1 x i1> @icmp_uge_vv_nxv1i32(<vscale x 1 x i32> %va, <vscale x
define <vscale x 1 x i1> @icmp_uge_vx_nxv1i32(<vscale x 1 x i32> %va, i32 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_uge_vx_nxv1i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, mf2, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsleu.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i32> poison, i32 %b, i32 0
@@ -1539,9 +1532,8 @@ define <vscale x 1 x i1> @icmp_sge_vv_nxv1i32(<vscale x 1 x i32> %va, <vscale x
define <vscale x 1 x i1> @icmp_sge_vx_nxv1i32(<vscale x 1 x i32> %va, i32 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sge_vx_nxv1i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, mf2, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i32> poison, i32 %b, i32 0
@@ -1661,9 +1653,8 @@ define <vscale x 1 x i1> @icmp_sle_vx_nxv1i32(<vscale x 1 x i32> %va, i32 %b, <v
define <vscale x 1 x i1> @icmp_sle_vx_swap_nxv1i32(<vscale x 1 x i32> %va, i32 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sle_vx_swap_nxv1i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, mf2, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vmsle.vv v0, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i32> poison, i32 %b, i32 0
@@ -1885,9 +1876,8 @@ define <vscale x 8 x i1> @icmp_uge_vv_nxv8i32(<vscale x 8 x i32> %va, <vscale x
define <vscale x 8 x i1> @icmp_uge_vx_nxv8i32(<vscale x 8 x i32> %va, i32 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_uge_vx_nxv8i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vmsleu.vv v12, v16, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: ret
@@ -2064,9 +2054,8 @@ define <vscale x 8 x i1> @icmp_sge_vv_nxv8i32(<vscale x 8 x i32> %va, <vscale x
define <vscale x 8 x i1> @icmp_sge_vx_nxv8i32(<vscale x 8 x i32> %va, i32 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sge_vx_nxv8i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vmsle.vv v12, v16, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: ret
@@ -2197,9 +2186,8 @@ define <vscale x 8 x i1> @icmp_sle_vx_nxv8i32(<vscale x 8 x i32> %va, i32 %b, <v
define <vscale x 8 x i1> @icmp_sle_vx_swap_nxv8i32(<vscale x 8 x i32> %va, i32 %b, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: icmp_sle_vx_swap_nxv8i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; CHECK-NEXT: vmv.v.x v16, a0
; CHECK-NEXT: vmsle.vv v12, v16, v8, v0.t
; CHECK-NEXT: vmv1r.v v0, v12
; CHECK-NEXT: ret
@@ -2633,9 +2621,8 @@ define <vscale x 1 x i1> @icmp_uge_vx_nxv1i64(<vscale x 1 x i64> %va, i64 %b, <v
;
; RV64-LABEL: icmp_uge_vx_nxv1i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m1, ta, ma
-; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vmsleu.vv v0, v9, v8, v0.t
; RV64-NEXT: ret
%elt.head = insertelement <vscale x 1 x i64> poison, i64 %b, i32 0
@@ -2881,9 +2868,8 @@ define <vscale x 1 x i1> @icmp_sge_vx_nxv1i64(<vscale x 1 x i64> %va, i64 %b, <v
;
; RV64-LABEL: icmp_sge_vx_nxv1i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m1, ta, ma
-; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vmsle.vv v0, v9, v8, v0.t
; RV64-NEXT: ret
%elt.head = insertelement <vscale x 1 x i64> poison, i64 %b, i32 0
@@ -3073,9 +3059,8 @@ define <vscale x 1 x i1> @icmp_sle_vx_swap_nxv1i64(<vscale x 1 x i64> %va, i64 %
;
; RV64-LABEL: icmp_sle_vx_swap_nxv1i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m1, ta, ma
-; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vsetvli zero, a1, e64, m1, ta, ma
+; RV64-NEXT: vmv.v.x v9, a0
; RV64-NEXT: vmsle.vv v0, v9, v8, v0.t
; RV64-NEXT: ret
%elt.head = insertelement <vscale x 1 x i64> poison, i64 %b, i32 0
@@ -3402,9 +3387,8 @@ define <vscale x 8 x i1> @icmp_uge_vx_nxv8i64(<vscale x 8 x i64> %va, i64 %b, <v
;
; RV64-LABEL: icmp_uge_vx_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vmsleu.vv v16, v24, v8, v0.t
; RV64-NEXT: vmv1r.v v0, v16
; RV64-NEXT: ret
@@ -3671,9 +3655,8 @@ define <vscale x 8 x i1> @icmp_sge_vx_nxv8i64(<vscale x 8 x i64> %va, i64 %b, <v
;
; RV64-LABEL: icmp_sge_vx_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vmsle.vv v16, v24, v8, v0.t
; RV64-NEXT: vmv1r.v v0, v16
; RV64-NEXT: ret
@@ -3879,9 +3862,8 @@ define <vscale x 8 x i1> @icmp_sle_vx_swap_nxv8i64(<vscale x 8 x i64> %va, i64 %
;
; RV64-LABEL: icmp_sle_vx_swap_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: vmsle.vv v16, v24, v8, v0.t
; RV64-NEXT: vmv1r.v v0, v16
; RV64-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll
index a4b7ca7f39768f..d276631c5883e8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vdiv_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v8, v8, 1, v0.t
; CHECK-NEXT: vsra.vi v8, v8, 1, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v9, v9, 1, v0.t
; CHECK-NEXT: vsra.vi v9, v9, 1, v0.t
; CHECK-NEXT: vdiv.vv v8, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll
index 67c3f9dbf2869a..7794e031d5c677 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vdivu_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2, v0.t
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a2, v0.t
; CHECK-NEXT: vdivu.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
index 80ada4670562d7..d4930bd2ae0396 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
@@ -143,9 +143,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32(<vscale x 1 x half> %va, half %b,
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -170,9 +169,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32_commute(<vscale x 1 x half> %va,
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -198,9 +196,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32_unmasked(<vscale x 1 x half> %va,
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -225,9 +222,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32_tu(<vscale x 1 x half> %va, half
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32_tu:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, tu, mu
@@ -254,9 +250,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32_commute_tu(<vscale x 1 x half> %v
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32_commute_tu:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, tu, mu
@@ -283,9 +278,8 @@ define <vscale x 1 x float> @vfmacc_vf_nxv1f32_unmasked_tu(<vscale x 1 x half> %
; ZVFHMIN-LABEL: vfmacc_vf_nxv1f32_unmasked_tu:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, tu, ma
@@ -362,9 +356,8 @@ define <vscale x 2 x float> @vfmacc_vf_nxv2f32(<vscale x 2 x half> %va, half %b,
; ZVFHMIN-LABEL: vfmacc_vf_nxv2f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -389,9 +382,8 @@ define <vscale x 2 x float> @vfmacc_vf_nxv2f32_unmasked(<vscale x 2 x half> %va,
; ZVFHMIN-LABEL: vfmacc_vf_nxv2f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -468,9 +460,8 @@ define <vscale x 4 x float> @vfmacc_vf_nxv4f32(<vscale x 4 x half> %va, half %b,
; ZVFHMIN-LABEL: vfmacc_vf_nxv4f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -495,9 +486,8 @@ define <vscale x 4 x float> @vfmacc_vf_nxv4f32_unmasked(<vscale x 4 x half> %va,
; ZVFHMIN-LABEL: vfmacc_vf_nxv4f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -574,9 +564,8 @@ define <vscale x 8 x float> @vfmacc_vf_nxv8f32(<vscale x 8 x half> %va, half %b,
; ZVFHMIN-LABEL: vfmacc_vf_nxv8f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -601,9 +590,8 @@ define <vscale x 8 x float> @vfmacc_vf_nxv8f32_unmasked(<vscale x 8 x half> %va,
; ZVFHMIN-LABEL: vfmacc_vf_nxv8f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -694,9 +682,8 @@ define <vscale x 16 x float> @vfmacc_vf_nxv16f32(<vscale x 16 x half> %va, half
; ZVFHMIN-LABEL: vfmacc_vf_nxv16f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v4, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -721,9 +708,8 @@ define <vscale x 16 x float> @vfmacc_vf_nxv16f32_unmasked(<vscale x 16 x half> %
; ZVFHMIN-LABEL: vfmacc_vf_nxv16f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v0, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll
index c92a79e49c1642..94b80075ac14c5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll
@@ -120,9 +120,8 @@ define <vscale x 1 x float> @vmfsac_vf_nxv1f32(<vscale x 1 x half> %a, half %b,
; ZVFHMIN-LABEL: vmfsac_vf_nxv1f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -148,9 +147,8 @@ define <vscale x 1 x float> @vmfsac_vf_nxv1f32_commute(<vscale x 1 x half> %a, h
; ZVFHMIN-LABEL: vmfsac_vf_nxv1f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -177,9 +175,8 @@ define <vscale x 1 x float> @vmfsac_vf_nxv1f32_unmasked(<vscale x 1 x half> %a,
; ZVFHMIN-LABEL: vmfsac_vf_nxv1f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -255,9 +252,8 @@ define <vscale x 2 x float> @vmfsac_vf_nxv2f32(<vscale x 2 x half> %a, half %b,
; ZVFHMIN-LABEL: vmfsac_vf_nxv2f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -283,9 +279,8 @@ define <vscale x 2 x float> @vmfsac_vf_nxv2f32_commute(<vscale x 2 x half> %a, h
; ZVFHMIN-LABEL: vmfsac_vf_nxv2f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -312,9 +307,8 @@ define <vscale x 2 x float> @vmfsac_vf_nxv2f32_unmasked(<vscale x 2 x half> %a,
; ZVFHMIN-LABEL: vmfsac_vf_nxv2f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -392,9 +386,8 @@ define <vscale x 4 x float> @vmfsac_vf_nxv4f32(<vscale x 4 x half> %a, half %b,
; ZVFHMIN-LABEL: vmfsac_vf_nxv4f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -420,9 +413,8 @@ define <vscale x 4 x float> @vmfsac_vf_nxv4f32_commute(<vscale x 4 x half> %a, h
; ZVFHMIN-LABEL: vmfsac_vf_nxv4f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v9, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -449,9 +441,8 @@ define <vscale x 4 x float> @vmfsac_vf_nxv4f32_unmasked(<vscale x 4 x half> %a,
; ZVFHMIN-LABEL: vmfsac_vf_nxv4f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -529,9 +520,8 @@ define <vscale x 8 x float> @vmfsac_vf_nxv8f32(<vscale x 8 x half> %a, half %b,
; ZVFHMIN-LABEL: vmfsac_vf_nxv8f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -557,9 +547,8 @@ define <vscale x 8 x float> @vmfsac_vf_nxv8f32_commute(<vscale x 8 x half> %a, h
; ZVFHMIN-LABEL: vmfsac_vf_nxv8f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -586,9 +575,8 @@ define <vscale x 8 x float> @vmfsac_vf_nxv8f32_unmasked(<vscale x 8 x half> %a,
; ZVFHMIN-LABEL: vmfsac_vf_nxv8f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
index 6ea58a4e768736..3fc58acd504e52 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
@@ -71,9 +71,8 @@ define <vscale x 1 x float> @vfnmacc_vf_nxv1f32(<vscale x 1 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv1f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -101,9 +100,8 @@ define <vscale x 1 x float> @vfnmacc_vf_nxv1f32_commute(<vscale x 1 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv1f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -131,9 +129,8 @@ define <vscale x 1 x float> @vfnmacc_vf_nxv1f32_unmasked(<vscale x 1 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv1f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -212,9 +209,8 @@ define <vscale x 2 x float> @vfnmacc_vf_nxv2f32(<vscale x 2 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv2f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -242,9 +238,8 @@ define <vscale x 2 x float> @vfnmacc_vf_nxv2f32_commute(<vscale x 2 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv2f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -272,9 +267,8 @@ define <vscale x 2 x float> @vfnmacc_vf_nxv2f32_unmasked(<vscale x 2 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv2f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -355,9 +349,8 @@ define <vscale x 4 x float> @vfnmacc_vf_nxv4f32(<vscale x 4 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv4f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -385,9 +378,8 @@ define <vscale x 4 x float> @vfnmacc_vf_nxv4f32_commute(<vscale x 4 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv4f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v9, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -415,9 +407,8 @@ define <vscale x 4 x float> @vfnmacc_vf_nxv4f32_unmasked(<vscale x 4 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv4f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -498,9 +489,8 @@ define <vscale x 8 x float> @vfnmacc_vf_nxv8f32(<vscale x 8 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv8f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -528,9 +518,8 @@ define <vscale x 8 x float> @vfnmacc_vf_nxv8f32_commute(<vscale x 8 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv8f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -558,9 +547,8 @@ define <vscale x 8 x float> @vfnmacc_vf_nxv8f32_unmasked(<vscale x 8 x half> %a,
; ZVFHMIN-LABEL: vfnmacc_vf_nxv8f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -655,9 +643,8 @@ define <vscale x 16 x float> @vfnmacc_vf_nxv16f32(<vscale x 16 x half> %a, half
; ZVFHMIN-LABEL: vfnmacc_vf_nxv16f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v4, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -685,9 +672,8 @@ define <vscale x 16 x float> @vfnmacc_vf_nxv16f32_commute(<vscale x 16 x half> %
; ZVFHMIN-LABEL: vfnmacc_vf_nxv16f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v4, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -715,9 +701,8 @@ define <vscale x 16 x float> @vfnmacc_vf_nxv16f32_unmasked(<vscale x 16 x half>
; ZVFHMIN-LABEL: vfnmacc_vf_nxv16f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v0, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
index 0afbe58038c76f..692a22bde48822 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
@@ -69,9 +69,8 @@ define <vscale x 1 x float> @vfnmsac_vf_nxv1f32(<vscale x 1 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv1f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -97,9 +96,8 @@ define <vscale x 1 x float> @vfnmsac_vf_nxv1f32_commute(<vscale x 1 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv1f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -126,9 +124,8 @@ define <vscale x 1 x float> @vfnmsac_vf_nxv1f32_unmasked(<vscale x 1 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv1f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
@@ -204,9 +201,8 @@ define <vscale x 2 x float> @vfnmsac_vf_nxv2f32(<vscale x 2 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv2f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -232,9 +228,8 @@ define <vscale x 2 x float> @vfnmsac_vf_nxv2f32_commute(<vscale x 2 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv2f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v11, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v10, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v11, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -261,9 +256,8 @@ define <vscale x 2 x float> @vfnmsac_vf_nxv2f32_unmasked(<vscale x 2 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv2f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v11, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v10
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma
@@ -341,9 +335,8 @@ define <vscale x 4 x float> @vfnmsac_vf_nxv4f32(<vscale x 4 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv4f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -369,9 +362,8 @@ define <vscale x 4 x float> @vfnmsac_vf_nxv4f32_commute(<vscale x 4 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv4f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v9, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v12, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v9, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -398,9 +390,8 @@ define <vscale x 4 x float> @vfnmsac_vf_nxv4f32_unmasked(<vscale x 4 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv4f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m1, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v12, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v14, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v12
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma
@@ -478,9 +469,8 @@ define <vscale x 8 x float> @vfnmsac_vf_nxv8f32(<vscale x 8 x half> %a, half %b,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv8f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -506,9 +496,8 @@ define <vscale x 8 x float> @vfnmsac_vf_nxv8f32_commute(<vscale x 8 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv8f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v10, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v16, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v10, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -535,9 +524,8 @@ define <vscale x 8 x float> @vfnmsac_vf_nxv8f32_unmasked(<vscale x 8 x half> %a,
; ZVFHMIN-LABEL: vfnmsac_vf_nxv8f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v16, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v20, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma
@@ -629,9 +617,8 @@ define <vscale x 16 x float> @vfnmsac_vf_nxv16f32(<vscale x 16 x half> %a, half
; ZVFHMIN-LABEL: vfnmsac_vf_nxv16f32:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v4, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -657,9 +644,8 @@ define <vscale x 16 x float> @vfnmsac_vf_nxv16f32_commute(<vscale x 16 x half> %
; ZVFHMIN-LABEL: vfnmsac_vf_nxv16f32_commute:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v4, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v24, v8, v0.t
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v4, v0.t
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
@@ -686,9 +672,8 @@ define <vscale x 16 x float> @vfnmsac_vf_nxv16f32_unmasked(<vscale x 16 x half>
; ZVFHMIN-LABEL: vfnmsac_vf_nxv16f32_unmasked:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
-; ZVFHMIN-NEXT: vsetvli a2, zero, e16, m4, ta, ma
-; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma
+; ZVFHMIN-NEXT: vmv.v.x v24, a1
; ZVFHMIN-NEXT: vfwcvt.f.f.v v0, v8
; ZVFHMIN-NEXT: vfwcvt.f.f.v v8, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
index f65e708f5303cc..cc02f54cda21a1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vmax_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v8, v8, 1, v0.t
; CHECK-NEXT: vsra.vi v8, v8, 1, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v9, v9, 1, v0.t
; CHECK-NEXT: vsra.vi v9, v9, 1, v0.t
; CHECK-NEXT: vmax.vv v8, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
index df1ad58e5ecbde..45dc63442f157f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vmaxu_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2, v0.t
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a2, v0.t
; CHECK-NEXT: vmaxu.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
index 0bf0638633aa45..f1edd99aa0b0dc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vmin_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v8, v8, 1, v0.t
; CHECK-NEXT: vsra.vi v8, v8, 1, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v9, v9, 1, v0.t
; CHECK-NEXT: vsra.vi v9, v9, 1, v0.t
; CHECK-NEXT: vmin.vv v8, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
index 2acebdf2e646d4..9929cc3b784556 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vminu_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2, v0.t
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a2, v0.t
; CHECK-NEXT: vminu.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll
index 51026cbcb8c4bf..0adee0157c8f38 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll
@@ -1440,11 +1440,10 @@ define <vscale x 8 x i64> @vmul_vadd_vx_nxv8i64_unmasked(<vscale x 8 x i64> %va,
; CHECK-LABEL: vmul_vadd_vx_nxv8i64_unmasked:
; CHECK: # %bb.0:
; CHECK-NEXT: li a1, 21
-; CHECK-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; CHECK-NEXT: vmv.v.x v16, a1
-; CHECK-NEXT: li a1, 7
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
-; CHECK-NEXT: vmadd.vx v8, a1, v16
+; CHECK-NEXT: vmv.v.x v16, a1
+; CHECK-NEXT: li a0, 7
+; CHECK-NEXT: vmadd.vx v8, a0, v16
; CHECK-NEXT: ret
%head = insertelement <vscale x 8 x i1> poison, i1 true, i32 0
%m = shufflevector <vscale x 8 x i1> %head, <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
index c0d7ecf74956b9..4d0899b3282c58 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll
@@ -231,17 +231,17 @@ define <vscale x 8 x i8> @vpgather_nxv8i8(<vscale x 8 x ptr> %ptrs, <vscale x 8
define <vscale x 8 x i8> @vpgather_baseidx_nxv8i8(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
-; RV32-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
-; RV64-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i8, ptr %base, <vscale x 8 x i8> %idxs
@@ -264,18 +264,18 @@ define <vscale x 32 x i8> @vpgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8
; RV32-NEXT: srli a3, a3, 2
; RV32-NEXT: vsetvli a5, zero, e8, mf2, ta, ma
; RV32-NEXT: vslidedown.vx v0, v0, a3
-; RV32-NEXT: vsetvli a3, zero, e32, m8, ta, ma
+; RV32-NEXT: vsetvli zero, a4, e32, m8, ta, ma
; RV32-NEXT: vsext.vf4 v16, v10
-; RV32-NEXT: vsetvli zero, a4, e8, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e8, m2, ta, ma
; RV32-NEXT: vluxei32.v v10, (a0), v16, v0.t
; RV32-NEXT: bltu a1, a2, .LBB12_2
; RV32-NEXT: # %bb.1:
; RV32-NEXT: mv a1, a2
; RV32-NEXT: .LBB12_2:
-; RV32-NEXT: vsetvli a2, zero, e32, m8, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m8, ta, ma
; RV32-NEXT: vsext.vf4 v16, v8
; RV32-NEXT: vmv1r.v v0, v12
-; RV32-NEXT: vsetvli zero, a1, e8, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e8, m2, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
@@ -298,18 +298,18 @@ define <vscale x 32 x i8> @vpgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8
; RV64-NEXT: srli a4, a2, 3
; RV64-NEXT: vsetvli a7, zero, e8, mf4, ta, ma
; RV64-NEXT: vslidedown.vx v0, v13, a4
-; RV64-NEXT: vsetvli a7, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a6, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v11
-; RV64-NEXT: vsetvli zero, a6, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vluxei64.v v11, (a0), v16, v0.t
; RV64-NEXT: bltu a5, a2, .LBB12_2
; RV64-NEXT: # %bb.1:
; RV64-NEXT: mv a5, a2
; RV64-NEXT: .LBB12_2:
-; RV64-NEXT: vsetvli a6, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a5, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v10
; RV64-NEXT: vmv1r.v v0, v13
-; RV64-NEXT: vsetvli zero, a5, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vluxei64.v v10, (a0), v16, v0.t
; RV64-NEXT: bltu a1, a3, .LBB12_4
; RV64-NEXT: # %bb.3:
@@ -321,18 +321,18 @@ define <vscale x 32 x i8> @vpgather_baseidx_nxv32i8(ptr %base, <vscale x 32 x i8
; RV64-NEXT: and a3, a5, a3
; RV64-NEXT: vsetvli a5, zero, e8, mf4, ta, ma
; RV64-NEXT: vslidedown.vx v0, v12, a4
-; RV64-NEXT: vsetvli a4, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a3, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v9
-; RV64-NEXT: vsetvli zero, a3, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vluxei64.v v9, (a0), v16, v0.t
; RV64-NEXT: bltu a1, a2, .LBB12_6
; RV64-NEXT: # %bb.5:
; RV64-NEXT: mv a1, a2
; RV64-NEXT: .LBB12_6:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vmv1r.v v0, v12
-; RV64-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i8, ptr %base, <vscale x 32 x i8> %idxs
@@ -525,19 +525,19 @@ define <vscale x 8 x i16> @vpgather_nxv8i16(<vscale x 8 x ptr> %ptrs, <vscale x
define <vscale x 8 x i16> @vpgather_baseidx_nxv8i8_nxv8i16(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <vscale x 8 x i8> %idxs
@@ -548,19 +548,19 @@ define <vscale x 8 x i16> @vpgather_baseidx_nxv8i8_nxv8i16(ptr %base, <vscale x
define <vscale x 8 x i16> @vpgather_baseidx_sext_nxv8i8_nxv8i16(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i16>
@@ -602,10 +602,10 @@ define <vscale x 8 x i16> @vpgather_baseidx_nxv8i16(ptr %base, <vscale x 8 x i16
;
; RV64-LABEL: vpgather_baseidx_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <vscale x 8 x i16> %idxs
@@ -751,19 +751,18 @@ define <vscale x 8 x i32> @vpgather_nxv8i32(<vscale x 8 x ptr> %ptrs, <vscale x
define <vscale x 8 x i32> @vpgather_baseidx_nxv8i8_nxv8i32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i8> %idxs
@@ -774,19 +773,18 @@ define <vscale x 8 x i32> @vpgather_baseidx_nxv8i8_nxv8i32(ptr %base, <vscale x
define <vscale x 8 x i32> @vpgather_baseidx_sext_nxv8i8_nxv8i32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -798,19 +796,19 @@ define <vscale x 8 x i32> @vpgather_baseidx_sext_nxv8i8_nxv8i32(ptr %base, <vsca
define <vscale x 8 x i32> @vpgather_baseidx_zext_nxv8i8_nxv8i32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v10, v8
; RV32-NEXT: vsll.vi v12, v10, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV32-NEXT: vluxei16.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v10, v8
; RV64-NEXT: vsll.vi v12, v10, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei16.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -822,19 +820,18 @@ define <vscale x 8 x i32> @vpgather_baseidx_zext_nxv8i8_nxv8i32(ptr %base, <vsca
define <vscale x 8 x i32> @vpgather_baseidx_nxv8i16_nxv8i32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i16> %idxs
@@ -845,19 +842,18 @@ define <vscale x 8 x i32> @vpgather_baseidx_nxv8i16_nxv8i32(ptr %base, <vscale x
define <vscale x 8 x i32> @vpgather_baseidx_sext_nxv8i16_nxv8i32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -869,19 +865,17 @@ define <vscale x 8 x i32> @vpgather_baseidx_sext_nxv8i16_nxv8i32(ptr %base, <vsc
define <vscale x 8 x i32> @vpgather_baseidx_zext_nxv8i16_nxv8i32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v12, v8
; RV64-NEXT: vsll.vi v8, v12, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -893,18 +887,17 @@ define <vscale x 8 x i32> @vpgather_baseidx_zext_nxv8i16_nxv8i32(ptr %base, <vsc
define <vscale x 8 x i32> @vpgather_baseidx_nxv8i32(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; RV32-NEXT: vsll.vi v8, v8, 2
; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsll.vi v8, v8, 2
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i32> %idxs
@@ -1008,19 +1001,18 @@ define <vscale x 8 x i64> @vpgather_nxv8i64(<vscale x 8 x ptr> %ptrs, <vscale x
define <vscale x 8 x i64> @vpgather_baseidx_nxv8i8_nxv8i64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i8> %idxs
@@ -1031,19 +1023,18 @@ define <vscale x 8 x i64> @vpgather_baseidx_nxv8i8_nxv8i64(ptr %base, <vscale x
define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i8_nxv8i64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -1055,19 +1046,19 @@ define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i8_nxv8i64(ptr %base, <vsca
define <vscale x 8 x i64> @vpgather_baseidx_zext_nxv8i8_nxv8i64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v10, v8
; RV32-NEXT: vsll.vi v16, v10, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v10, v8
; RV64-NEXT: vsll.vi v16, v10, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -1079,19 +1070,18 @@ define <vscale x 8 x i64> @vpgather_baseidx_zext_nxv8i8_nxv8i64(ptr %base, <vsca
define <vscale x 8 x i64> @vpgather_baseidx_nxv8i16_nxv8i64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i16> %idxs
@@ -1102,19 +1092,18 @@ define <vscale x 8 x i64> @vpgather_baseidx_nxv8i16_nxv8i64(ptr %base, <vscale x
define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i16_nxv8i64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -1126,19 +1115,19 @@ define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i16_nxv8i64(ptr %base, <vsc
define <vscale x 8 x i64> @vpgather_baseidx_zext_nxv8i16_nxv8i64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v12, v8
; RV64-NEXT: vsll.vi v16, v12, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -1150,18 +1139,17 @@ define <vscale x 8 x i64> @vpgather_baseidx_zext_nxv8i16_nxv8i64(ptr %base, <vsc
define <vscale x 8 x i64> @vpgather_baseidx_nxv8i32_nxv8i64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i32> %idxs
@@ -1172,18 +1160,17 @@ define <vscale x 8 x i64> @vpgather_baseidx_nxv8i32_nxv8i64(ptr %base, <vscale x
define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i32_nxv8i64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -1195,18 +1182,17 @@ define <vscale x 8 x i64> @vpgather_baseidx_sext_nxv8i32_nxv8i64(ptr %base, <vsc
define <vscale x 8 x i64> @vpgather_baseidx_zext_nxv8i32_nxv8i64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -1220,16 +1206,16 @@ define <vscale x 8 x i64> @vpgather_baseidx_nxv8i64(ptr %base, <vscale x 8 x i64
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v16, v8, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i64> %idxs
@@ -1338,19 +1324,19 @@ define <vscale x 8 x half> @vpgather_nxv8f16(<vscale x 8 x ptr> %ptrs, <vscale x
define <vscale x 8 x half> @vpgather_baseidx_nxv8i8_nxv8f16(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <vscale x 8 x i8> %idxs
@@ -1361,19 +1347,19 @@ define <vscale x 8 x half> @vpgather_baseidx_nxv8i8_nxv8f16(ptr %base, <vscale x
define <vscale x 8 x half> @vpgather_baseidx_sext_nxv8i8_nxv8f16(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i16>
@@ -1415,10 +1401,10 @@ define <vscale x 8 x half> @vpgather_baseidx_nxv8f16(ptr %base, <vscale x 8 x i1
;
; RV64-LABEL: vpgather_baseidx_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <vscale x 8 x i16> %idxs
@@ -1522,19 +1508,18 @@ define <vscale x 8 x float> @vpgather_nxv8f32(<vscale x 8 x ptr> %ptrs, <vscale
define <vscale x 8 x float> @vpgather_baseidx_nxv8i8_nxv8f32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i8> %idxs
@@ -1545,19 +1530,18 @@ define <vscale x 8 x float> @vpgather_baseidx_nxv8i8_nxv8f32(ptr %base, <vscale
define <vscale x 8 x float> @vpgather_baseidx_sext_nxv8i8_nxv8f32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -1569,19 +1553,19 @@ define <vscale x 8 x float> @vpgather_baseidx_sext_nxv8i8_nxv8f32(ptr %base, <vs
define <vscale x 8 x float> @vpgather_baseidx_zext_nxv8i8_nxv8f32(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v10, v8
; RV32-NEXT: vsll.vi v12, v10, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV32-NEXT: vluxei16.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v10, v8
; RV64-NEXT: vsll.vi v12, v10, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei16.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -1593,19 +1577,18 @@ define <vscale x 8 x float> @vpgather_baseidx_zext_nxv8i8_nxv8f32(ptr %base, <vs
define <vscale x 8 x float> @vpgather_baseidx_nxv8i16_nxv8f32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i16> %idxs
@@ -1616,19 +1599,18 @@ define <vscale x 8 x float> @vpgather_baseidx_nxv8i16_nxv8f32(ptr %base, <vscale
define <vscale x 8 x float> @vpgather_baseidx_sext_nxv8i16_nxv8f32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -1640,19 +1622,17 @@ define <vscale x 8 x float> @vpgather_baseidx_sext_nxv8i16_nxv8f32(ptr %base, <v
define <vscale x 8 x float> @vpgather_baseidx_zext_nxv8i16_nxv8f32(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v12, v8
; RV32-NEXT: vsll.vi v8, v12, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v12, v8
; RV64-NEXT: vsll.vi v8, v12, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -1664,18 +1644,17 @@ define <vscale x 8 x float> @vpgather_baseidx_zext_nxv8i16_nxv8f32(ptr %base, <v
define <vscale x 8 x float> @vpgather_baseidx_nxv8f32(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; RV32-NEXT: vsll.vi v8, v8, 2
; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsll.vi v8, v8, 2
; RV32-NEXT: vluxei32.v v8, (a0), v8, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i32> %idxs
@@ -1779,19 +1758,18 @@ define <vscale x 6 x double> @vpgather_nxv6f64(<vscale x 6 x ptr> %ptrs, <vscale
define <vscale x 6 x double> @vpgather_baseidx_nxv6i8_nxv6f64(ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i8> %idxs
@@ -1802,19 +1780,18 @@ define <vscale x 6 x double> @vpgather_baseidx_nxv6i8_nxv6f64(ptr %base, <vscale
define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i8_nxv6f64(ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i8> %idxs to <vscale x 6 x i64>
@@ -1826,19 +1803,19 @@ define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i8_nxv6f64(ptr %base, <v
define <vscale x 6 x double> @vpgather_baseidx_zext_nxv6i8_nxv6f64(ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v10, v8
; RV32-NEXT: vsll.vi v16, v10, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v10, v8
; RV64-NEXT: vsll.vi v16, v10, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i8> %idxs to <vscale x 6 x i64>
@@ -1850,19 +1827,18 @@ define <vscale x 6 x double> @vpgather_baseidx_zext_nxv6i8_nxv6f64(ptr %base, <v
define <vscale x 6 x double> @vpgather_baseidx_nxv6i16_nxv6f64(ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i16> %idxs
@@ -1873,19 +1849,18 @@ define <vscale x 6 x double> @vpgather_baseidx_nxv6i16_nxv6f64(ptr %base, <vscal
define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i16_nxv6f64(ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i16> %idxs to <vscale x 6 x i64>
@@ -1897,19 +1872,19 @@ define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i16_nxv6f64(ptr %base, <
define <vscale x 6 x double> @vpgather_baseidx_zext_nxv6i16_nxv6f64(ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v12, v8
; RV64-NEXT: vsll.vi v16, v12, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i16> %idxs to <vscale x 6 x i64>
@@ -1921,18 +1896,17 @@ define <vscale x 6 x double> @vpgather_baseidx_zext_nxv6i16_nxv6f64(ptr %base, <
define <vscale x 6 x double> @vpgather_baseidx_nxv6i32_nxv6f64(ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i32> %idxs
@@ -1943,18 +1917,17 @@ define <vscale x 6 x double> @vpgather_baseidx_nxv6i32_nxv6f64(ptr %base, <vscal
define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i32_nxv6f64(ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i32> %idxs to <vscale x 6 x i64>
@@ -1966,18 +1939,17 @@ define <vscale x 6 x double> @vpgather_baseidx_sext_nxv6i32_nxv6f64(ptr %base, <
define <vscale x 6 x double> @vpgather_baseidx_zext_nxv6i32_nxv6f64(ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i32> %idxs to <vscale x 6 x i64>
@@ -1991,16 +1963,16 @@ define <vscale x 6 x double> @vpgather_baseidx_nxv6f64(ptr %base, <vscale x 6 x
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v16, v8, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i64> %idxs
@@ -2030,19 +2002,18 @@ define <vscale x 8 x double> @vpgather_nxv8f64(<vscale x 8 x ptr> %ptrs, <vscale
define <vscale x 8 x double> @vpgather_baseidx_nxv8i8_nxv8f64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i8> %idxs
@@ -2053,19 +2024,18 @@ define <vscale x 8 x double> @vpgather_baseidx_nxv8i8_nxv8f64(ptr %base, <vscale
define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i8_nxv8f64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -2077,19 +2047,19 @@ define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i8_nxv8f64(ptr %base, <v
define <vscale x 8 x double> @vpgather_baseidx_zext_nxv8i8_nxv8f64(ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v10, v8
; RV32-NEXT: vsll.vi v16, v10, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v10, v8
; RV64-NEXT: vsll.vi v16, v10, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -2101,19 +2071,18 @@ define <vscale x 8 x double> @vpgather_baseidx_zext_nxv8i8_nxv8f64(ptr %base, <v
define <vscale x 8 x double> @vpgather_baseidx_nxv8i16_nxv8f64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i16> %idxs
@@ -2124,19 +2093,18 @@ define <vscale x 8 x double> @vpgather_baseidx_nxv8i16_nxv8f64(ptr %base, <vscal
define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i16_nxv8f64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -2148,19 +2116,19 @@ define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i16_nxv8f64(ptr %base, <
define <vscale x 8 x double> @vpgather_baseidx_zext_nxv8i16_nxv8f64(ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v12, v8
; RV32-NEXT: vsll.vi v16, v12, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v12, v8
; RV64-NEXT: vsll.vi v16, v12, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -2172,18 +2140,17 @@ define <vscale x 8 x double> @vpgather_baseidx_zext_nxv8i16_nxv8f64(ptr %base, <
define <vscale x 8 x double> @vpgather_baseidx_nxv8i32_nxv8f64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i32> %idxs
@@ -2194,18 +2161,17 @@ define <vscale x 8 x double> @vpgather_baseidx_nxv8i32_nxv8f64(ptr %base, <vscal
define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i32_nxv8f64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_sext_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_sext_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -2217,18 +2183,17 @@ define <vscale x 8 x double> @vpgather_baseidx_sext_nxv8i32_nxv8f64(ptr %base, <
define <vscale x 8 x double> @vpgather_baseidx_zext_nxv8i32_nxv8f64(ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpgather_baseidx_zext_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v8, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_zext_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v16, v8
; RV64-NEXT: vsll.vi v8, v16, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -2242,16 +2207,16 @@ define <vscale x 8 x double> @vpgather_baseidx_nxv8f64(ptr %base, <vscale x 8 x
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v16, v8, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vluxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpgather_baseidx_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v8, v8, 3
; RV64-NEXT: vluxei64.v v8, (a0), v8, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i64> %idxs
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
index 59662db42898fc..15f22b746bede2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
@@ -170,17 +170,17 @@ define void @vpscatter_nxv8i8(<vscale x 8 x i8> %val, <vscale x 8 x ptr> %ptrs,
define void @vpscatter_baseidx_nxv8i8(<vscale x 8 x i8> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v9
-; RV32-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v9
-; RV64-NEXT: vsetvli zero, a1, e8, m1, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e8, m1, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i8, ptr %base, <vscale x 8 x i8> %idxs
@@ -325,19 +325,19 @@ define void @vpscatter_nxv8i16(<vscale x 8 x i16> %val, <vscale x 8 x ptr> %ptrs
define void @vpscatter_baseidx_nxv8i8_nxv8i16(<vscale x 8 x i16> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v10
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <vscale x 8 x i8> %idxs
@@ -348,19 +348,19 @@ define void @vpscatter_baseidx_nxv8i8_nxv8i16(<vscale x 8 x i16> %val, ptr %base
define void @vpscatter_baseidx_sext_nxv8i8_nxv8i16(<vscale x 8 x i16> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v10
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i16>
@@ -402,10 +402,10 @@ define void @vpscatter_baseidx_nxv8i16(<vscale x 8 x i16> %val, ptr %base, <vsca
;
; RV64-LABEL: vpscatter_baseidx_nxv8i16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i16, ptr %base, <vscale x 8 x i16> %idxs
@@ -469,8 +469,9 @@ define void @vpscatter_baseidx_vpsext_nxv8i32_nxv8i16(<vscale x 8 x i16> %val, p
; RV32-NEXT: vsext.vf2 v16, v12, v0.t
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v12, v16, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
@@ -495,8 +496,9 @@ define void @vpscatter_baseidx_vpzext_nxv8i32_nxv8i16(<vscale x 8 x i16> %val, p
; RV32-NEXT: vzext.vf2 v16, v12, v0.t
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v12, v16, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
@@ -625,19 +627,18 @@ define void @vpscatter_nxv8i32(<vscale x 8 x i32> %val, <vscale x 8 x ptr> %ptrs
define void @vpscatter_baseidx_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i8> %idxs
@@ -648,19 +649,18 @@ define void @vpscatter_baseidx_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr %base
define void @vpscatter_baseidx_sext_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -672,19 +672,19 @@ define void @vpscatter_baseidx_sext_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr
define void @vpscatter_baseidx_zext_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v14, v12
; RV32-NEXT: vsll.vi v12, v14, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV32-NEXT: vsoxei16.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v14, v12
; RV64-NEXT: vsll.vi v12, v14, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei16.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -696,19 +696,18 @@ define void @vpscatter_baseidx_zext_nxv8i8_nxv8i32(<vscale x 8 x i32> %val, ptr
define void @vpscatter_baseidx_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i16> %idxs
@@ -719,19 +718,18 @@ define void @vpscatter_baseidx_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr %bas
define void @vpscatter_baseidx_sext_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -743,19 +741,17 @@ define void @vpscatter_baseidx_sext_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr
define void @vpscatter_baseidx_zext_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v16, v12
; RV64-NEXT: vsll.vi v12, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -767,18 +763,17 @@ define void @vpscatter_baseidx_zext_nxv8i16_nxv8i32(<vscale x 8 x i32> %val, ptr
define void @vpscatter_baseidx_nxv8i32(<vscale x 8 x i32> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; RV32-NEXT: vsll.vi v12, v12, 2
; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsll.vi v12, v12, 2
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i32, ptr %base, <vscale x 8 x i32> %idxs
@@ -877,19 +872,18 @@ define void @vpscatter_nxv8i64(<vscale x 8 x i64> %val, <vscale x 8 x ptr> %ptrs
define void @vpscatter_baseidx_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i8> %idxs
@@ -900,19 +894,18 @@ define void @vpscatter_baseidx_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr %base
define void @vpscatter_baseidx_sext_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -924,19 +917,19 @@ define void @vpscatter_baseidx_sext_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr
define void @vpscatter_baseidx_zext_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v18, v16
; RV32-NEXT: vsll.vi v16, v18, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v18, v16
; RV64-NEXT: vsll.vi v16, v18, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -948,19 +941,18 @@ define void @vpscatter_baseidx_zext_nxv8i8_nxv8i64(<vscale x 8 x i64> %val, ptr
define void @vpscatter_baseidx_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i16> %idxs
@@ -971,19 +963,18 @@ define void @vpscatter_baseidx_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr %bas
define void @vpscatter_baseidx_sext_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -995,19 +986,19 @@ define void @vpscatter_baseidx_sext_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr
define void @vpscatter_baseidx_zext_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v20, v16
; RV64-NEXT: vsll.vi v16, v20, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -1019,18 +1010,17 @@ define void @vpscatter_baseidx_zext_nxv8i16_nxv8i64(<vscale x 8 x i64> %val, ptr
define void @vpscatter_baseidx_nxv8i32_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i32> %idxs
@@ -1041,18 +1031,17 @@ define void @vpscatter_baseidx_nxv8i32_nxv8i64(<vscale x 8 x i64> %val, ptr %bas
define void @vpscatter_baseidx_sext_nxv8i32_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -1064,18 +1053,17 @@ define void @vpscatter_baseidx_sext_nxv8i32_nxv8i64(<vscale x 8 x i64> %val, ptr
define void @vpscatter_baseidx_zext_nxv8i32_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i32_nxv8i64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i32_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -1089,16 +1077,16 @@ define void @vpscatter_baseidx_nxv8i64(<vscale x 8 x i64> %val, ptr %base, <vsca
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v24, v16, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v24, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds i64, ptr %base, <vscale x 8 x i64> %idxs
@@ -1197,19 +1185,19 @@ define void @vpscatter_nxv8f16(<vscale x 8 x half> %val, <vscale x 8 x ptr> %ptr
define void @vpscatter_baseidx_nxv8i8_nxv8f16(<vscale x 8 x half> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v10
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <vscale x 8 x i8> %idxs
@@ -1220,19 +1208,19 @@ define void @vpscatter_baseidx_nxv8i8_nxv8f16(<vscale x 8 x half> %val, ptr %bas
define void @vpscatter_baseidx_sext_nxv8i8_nxv8f16(<vscale x 8 x half> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f16:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v12, v10
; RV32-NEXT: vadd.vv v12, v12, v12
-; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i16>
@@ -1274,10 +1262,10 @@ define void @vpscatter_baseidx_nxv8f16(<vscale x 8 x half> %val, ptr %base, <vsc
;
; RV64-LABEL: vpscatter_baseidx_nxv8f16:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v10
; RV64-NEXT: vadd.vv v16, v16, v16
-; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds half, ptr %base, <vscale x 8 x i16> %idxs
@@ -1376,19 +1364,18 @@ define void @vpscatter_nxv8f32(<vscale x 8 x float> %val, <vscale x 8 x ptr> %pt
define void @vpscatter_baseidx_nxv8i8_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i8> %idxs
@@ -1399,19 +1386,18 @@ define void @vpscatter_baseidx_nxv8i8_nxv8f32(<vscale x 8 x float> %val, ptr %ba
define void @vpscatter_baseidx_sext_nxv8i8_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -1423,19 +1409,19 @@ define void @vpscatter_baseidx_sext_nxv8i8_nxv8f32(<vscale x 8 x float> %val, pt
define void @vpscatter_baseidx_zext_nxv8i8_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v14, v12
; RV32-NEXT: vsll.vi v12, v14, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV32-NEXT: vsoxei16.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v14, v12
; RV64-NEXT: vsll.vi v12, v14, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei16.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i32>
@@ -1447,19 +1433,18 @@ define void @vpscatter_baseidx_zext_nxv8i8_nxv8f32(<vscale x 8 x float> %val, pt
define void @vpscatter_baseidx_nxv8i16_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i16> %idxs
@@ -1470,19 +1455,18 @@ define void @vpscatter_baseidx_nxv8i16_nxv8f32(<vscale x 8 x float> %val, ptr %b
define void @vpscatter_baseidx_sext_nxv8i16_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -1494,19 +1478,17 @@ define void @vpscatter_baseidx_sext_nxv8i16_nxv8f32(<vscale x 8 x float> %val, p
define void @vpscatter_baseidx_zext_nxv8i16_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v16, v12
; RV32-NEXT: vsll.vi v12, v16, 2
-; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v16, v12
; RV64-NEXT: vsll.vi v12, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i32>
@@ -1518,18 +1500,17 @@ define void @vpscatter_baseidx_zext_nxv8i16_nxv8f32(<vscale x 8 x float> %val, p
define void @vpscatter_baseidx_nxv8f32(<vscale x 8 x float> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8f32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
-; RV32-NEXT: vsll.vi v12, v12, 2
; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV32-NEXT: vsll.vi v12, v12, 2
; RV32-NEXT: vsoxei32.v v8, (a0), v12, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8f32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v16, v12
; RV64-NEXT: vsll.vi v16, v16, 2
-; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds float, ptr %base, <vscale x 8 x i32> %idxs
@@ -1628,19 +1609,18 @@ define void @vpscatter_nxv6f64(<vscale x 6 x double> %val, <vscale x 6 x ptr> %p
define void @vpscatter_baseidx_nxv6i8_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i8> %idxs
@@ -1651,19 +1631,18 @@ define void @vpscatter_baseidx_nxv6i8_nxv6f64(<vscale x 6 x double> %val, ptr %b
define void @vpscatter_baseidx_sext_nxv6i8_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i8> %idxs to <vscale x 6 x i64>
@@ -1675,19 +1654,19 @@ define void @vpscatter_baseidx_sext_nxv6i8_nxv6f64(<vscale x 6 x double> %val, p
define void @vpscatter_baseidx_zext_nxv6i8_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i8> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv6i8_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v18, v16
; RV32-NEXT: vsll.vi v16, v18, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv6i8_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v18, v16
; RV64-NEXT: vsll.vi v16, v18, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i8> %idxs to <vscale x 6 x i64>
@@ -1699,19 +1678,18 @@ define void @vpscatter_baseidx_zext_nxv6i8_nxv6f64(<vscale x 6 x double> %val, p
define void @vpscatter_baseidx_nxv6i16_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i16> %idxs
@@ -1722,19 +1700,18 @@ define void @vpscatter_baseidx_nxv6i16_nxv6f64(<vscale x 6 x double> %val, ptr %
define void @vpscatter_baseidx_sext_nxv6i16_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i16> %idxs to <vscale x 6 x i64>
@@ -1746,19 +1723,19 @@ define void @vpscatter_baseidx_sext_nxv6i16_nxv6f64(<vscale x 6 x double> %val,
define void @vpscatter_baseidx_zext_nxv6i16_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i16> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv6i16_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv6i16_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v20, v16
; RV64-NEXT: vsll.vi v16, v20, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i16> %idxs to <vscale x 6 x i64>
@@ -1770,18 +1747,17 @@ define void @vpscatter_baseidx_zext_nxv6i16_nxv6f64(<vscale x 6 x double> %val,
define void @vpscatter_baseidx_nxv6i32_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i32> %idxs
@@ -1792,18 +1768,17 @@ define void @vpscatter_baseidx_nxv6i32_nxv6f64(<vscale x 6 x double> %val, ptr %
define void @vpscatter_baseidx_sext_nxv6i32_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 6 x i32> %idxs to <vscale x 6 x i64>
@@ -1815,18 +1790,17 @@ define void @vpscatter_baseidx_sext_nxv6i32_nxv6f64(<vscale x 6 x double> %val,
define void @vpscatter_baseidx_zext_nxv6i32_nxv6f64(<vscale x 6 x double> %val, ptr %base, <vscale x 6 x i32> %idxs, <vscale x 6 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv6i32_nxv6f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv6i32_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 6 x i32> %idxs to <vscale x 6 x i64>
@@ -1840,16 +1814,16 @@ define void @vpscatter_baseidx_nxv6f64(<vscale x 6 x double> %val, ptr %base, <v
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v24, v16, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v24, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv6f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 6 x i64> %idxs
@@ -1878,19 +1852,18 @@ define void @vpscatter_nxv8f64(<vscale x 8 x double> %val, <vscale x 8 x ptr> %p
define void @vpscatter_baseidx_nxv8i8_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i8> %idxs
@@ -1901,19 +1874,18 @@ define void @vpscatter_baseidx_nxv8i8_nxv8f64(<vscale x 8 x double> %val, ptr %b
define void @vpscatter_baseidx_sext_nxv8i8_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf4 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf8 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -1925,19 +1897,19 @@ define void @vpscatter_baseidx_sext_nxv8i8_nxv8f64(<vscale x 8 x double> %val, p
define void @vpscatter_baseidx_zext_nxv8i8_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i8> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV32-NEXT: vzext.vf2 v18, v16
; RV32-NEXT: vsll.vi v16, v18, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i8_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; RV64-NEXT: vzext.vf2 v18, v16
; RV64-NEXT: vsll.vi v16, v18, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei16.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i8> %idxs to <vscale x 8 x i64>
@@ -1949,19 +1921,18 @@ define void @vpscatter_baseidx_zext_nxv8i8_nxv8f64(<vscale x 8 x double> %val, p
define void @vpscatter_baseidx_nxv8i16_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i16> %idxs
@@ -1972,19 +1943,18 @@ define void @vpscatter_baseidx_nxv8i16_nxv8f64(<vscale x 8 x double> %val, ptr %
define void @vpscatter_baseidx_sext_nxv8i16_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf4 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -1996,19 +1966,19 @@ define void @vpscatter_baseidx_sext_nxv8i16_nxv8f64(<vscale x 8 x double> %val,
define void @vpscatter_baseidx_zext_nxv8i16_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i16> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vzext.vf2 v20, v16
; RV32-NEXT: vsll.vi v16, v20, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i16_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV64-NEXT: vzext.vf2 v20, v16
; RV64-NEXT: vsll.vi v16, v20, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV64-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i16> %idxs to <vscale x 8 x i64>
@@ -2020,18 +1990,17 @@ define void @vpscatter_baseidx_zext_nxv8i16_nxv8f64(<vscale x 8 x double> %val,
define void @vpscatter_baseidx_nxv8i32_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i32> %idxs
@@ -2042,18 +2011,17 @@ define void @vpscatter_baseidx_nxv8i32_nxv8f64(<vscale x 8 x double> %val, ptr %
define void @vpscatter_baseidx_sext_nxv8i32_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_sext_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_sext_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = sext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -2065,18 +2033,17 @@ define void @vpscatter_baseidx_sext_nxv8i32_nxv8f64(<vscale x 8 x double> %val,
define void @vpscatter_baseidx_zext_nxv8i32_nxv8f64(<vscale x 8 x double> %val, ptr %base, <vscale x 8 x i32> %idxs, <vscale x 8 x i1> %m, i32 zeroext %evl) {
; RV32-LABEL: vpscatter_baseidx_zext_nxv8i32_nxv8f64:
; RV32: # %bb.0:
-; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v16, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_zext_nxv8i32_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
+; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vzext.vf2 v24, v16
; RV64-NEXT: vsll.vi v16, v24, 3
-; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%eidxs = zext <vscale x 8 x i32> %idxs to <vscale x 8 x i64>
@@ -2090,16 +2057,16 @@ define void @vpscatter_baseidx_nxv8f64(<vscale x 8 x double> %val, ptr %base, <v
; RV32: # %bb.0:
; RV32-NEXT: vsetvli a2, zero, e32, m4, ta, ma
; RV32-NEXT: vnsrl.wi v24, v16, 0
+; RV32-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; RV32-NEXT: vsll.vi v16, v24, 3
-; RV32-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
; RV32-NEXT: vsoxei32.v v8, (a0), v16, v0.t
; RV32-NEXT: ret
;
; RV64-LABEL: vpscatter_baseidx_nxv8f64:
; RV64: # %bb.0:
-; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
-; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
+; RV64-NEXT: vsll.vi v16, v16, 3
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
; RV64-NEXT: ret
%ptrs = getelementptr inbounds double, ptr %base, <vscale x 8 x i64> %idxs
diff --git a/llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll
index 2ef96f4b3896fc..9304e8c58f90db 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vrem_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v8, v8, 1, v0.t
; CHECK-NEXT: vsra.vi v8, v8, 1, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v9, v9, 1, v0.t
; CHECK-NEXT: vsra.vi v9, v9, 1, v0.t
; CHECK-NEXT: vrem.vv v8, v8, v9, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll
index 1f1ed4a1269acb..1d4ee06cb9ac85 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vremu_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2, v0.t
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a2, v0.t
; CHECK-NEXT: vremu.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
index 027c81180d5f19..f28153d427ebfe 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
@@ -1125,7 +1125,6 @@ exit:
define <vscale x 4 x i32> @clobbered_forwarded_avl(i64 %n, <vscale x 4 x i32> %v, i1 %cmp) {
; CHECK-LABEL: clobbered_forwarded_avl:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: mv a2, a0
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: andi a1, a1, 1
; CHECK-NEXT: .LBB27_1: # %for.body
@@ -1133,9 +1132,7 @@ define <vscale x 4 x i32> @clobbered_forwarded_avl(i64 %n, <vscale x 4 x i32> %v
; CHECK-NEXT: addi a0, a0, 1
; CHECK-NEXT: bnez a1, .LBB27_1
; CHECK-NEXT: # %bb.2: # %for.cond.cleanup
-; CHECK-NEXT: vsetvli a0, zero, e32, m2, ta, ma
; CHECK-NEXT: vadd.vv v10, v8, v8
-; CHECK-NEXT: vsetvli zero, a2, e32, m2, ta, ma
; CHECK-NEXT: vadd.vv v8, v10, v8
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll
index 380835494ed17d..f5c46aec86b864 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll
@@ -9,10 +9,9 @@ declare <vscale x 8 x i7> @llvm.vp.shl.nxv8i7(<vscale x 8 x i7>, <vscale x 8 x i
define <vscale x 8 x i7> @vsll_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <vscale x 8 x i1> %mask, i32 zeroext %evl) {
; CHECK-LABEL: vsll_vx_nxv8i7:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: li a0, 127
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a0, v0.t
; CHECK-NEXT: vsll.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll
index cff8cc710d21f3..ecce91982b14a4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll
@@ -12,10 +12,8 @@ define <vscale x 8 x i7> @vsra_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vsll.vi v8, v8, 1, v0.t
; CHECK-NEXT: vsra.vi v8, v8, 1, v0.t
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: li a0, 127
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a0, v0.t
; CHECK-NEXT: vsra.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll
index ff6771b643031f..9431b9abc91fbf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll
@@ -12,9 +12,7 @@ define <vscale x 8 x i7> @vsrl_vx_nxv8i7(<vscale x 8 x i7> %a, i7 signext %b, <v
; CHECK-NEXT: li a2, 127
; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v8, v8, a2, v0.t
-; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vand.vx v9, v9, a2, v0.t
; CHECK-NEXT: vsrl.vv v8, v8, v9, v0.t
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
index 613b58b0f1b88a..bab66aad8c7e2a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll
@@ -62,9 +62,8 @@ define <vscale x 1 x i8> @vssub_vx_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vscale
define <vscale x 1 x i8> @vssub_vx_nxv1i8_commute(<vscale x 1 x i8> %va, i8 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssub_vx_nxv1i8_commute:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf8, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vssub.vv v8, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i8> poison, i8 %b, i32 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
index 8c729d7d9bfb6e..ca56145260f51c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll
@@ -60,9 +60,8 @@ define <vscale x 1 x i8> @vssubu_vx_nxv1i8(<vscale x 1 x i8> %va, i8 %b, <vscale
define <vscale x 1 x i8> @vssubu_vx_nxv1i8_commute(<vscale x 1 x i8> %va, i8 %b, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vssubu_vx_nxv1i8_commute:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf8, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vssubu.vv v8, v9, v8, v0.t
; CHECK-NEXT: ret
%elt.head = insertelement <vscale x 1 x i8> poison, i8 %b, i32 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll
index bb3076b3a945e8..c30c4763dd46d5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll
@@ -13,10 +13,9 @@ declare <vscale x 2 x i64> @llvm.vp.shl.nxv2i64(<vscale x 2 x i64>, <vscale x 2
define <vscale x 2 x i64> @vwsll_vv_nxv2i64_sext(<vscale x 2 x i32> %a, <vscale x 2 x i32> %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv2i64_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e64, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -35,10 +34,9 @@ define <vscale x 2 x i64> @vwsll_vv_nxv2i64_sext(<vscale x 2 x i32> %a, <vscale
define <vscale x 2 x i64> @vwsll_vv_nxv2i64_zext(<vscale x 2 x i32> %a, <vscale x 2 x i32> %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv2i64_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e64, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -57,17 +55,15 @@ define <vscale x 2 x i64> @vwsll_vv_nxv2i64_zext(<vscale x 2 x i32> %a, <vscale
define <vscale x 2 x i64> @vwsll_vx_i64_nxv2i64(<vscale x 2 x i32> %a, i64 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-RV32-LABEL: vwsll_vx_i64_nxv2i64:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetvli a1, zero, e64, m2, ta, ma
-; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsetvli zero, a2, e64, m2, ta, ma
+; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV32-NEXT: ret
;
; CHECK-RV64-LABEL: vwsll_vx_i64_nxv2i64:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetvli a2, zero, e64, m2, ta, ma
-; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsetvli zero, a1, e64, m2, ta, ma
+; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV64-NEXT: ret
;
@@ -94,12 +90,11 @@ define <vscale x 2 x i64> @vwsll_vx_i64_nxv2i64(<vscale x 2 x i32> %a, i64 %b, <
define <vscale x 2 x i64> @vwsll_vx_i32_nxv2i64_sext(<vscale x 2 x i32> %a, i32 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i32_nxv2i64_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -120,12 +115,11 @@ define <vscale x 2 x i64> @vwsll_vx_i32_nxv2i64_sext(<vscale x 2 x i32> %a, i32
define <vscale x 2 x i64> @vwsll_vx_i32_nxv2i64_zext(<vscale x 2 x i32> %a, i32 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i32_nxv2i64_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e32, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -146,12 +140,11 @@ define <vscale x 2 x i64> @vwsll_vx_i32_nxv2i64_zext(<vscale x 2 x i32> %a, i32
define <vscale x 2 x i64> @vwsll_vx_i16_nxv2i64_sext(<vscale x 2 x i32> %a, i16 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i16_nxv2i64_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf4 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -172,12 +165,11 @@ define <vscale x 2 x i64> @vwsll_vx_i16_nxv2i64_sext(<vscale x 2 x i32> %a, i16
define <vscale x 2 x i64> @vwsll_vx_i16_nxv2i64_zext(<vscale x 2 x i32> %a, i16 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i16_nxv2i64_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, mf2, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf4 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -198,12 +190,11 @@ define <vscale x 2 x i64> @vwsll_vx_i16_nxv2i64_zext(<vscale x 2 x i32> %a, i16
define <vscale x 2 x i64> @vwsll_vx_i8_nxv2i64_sext(<vscale x 2 x i32> %a, i8 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv2i64_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf8 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -224,12 +215,11 @@ define <vscale x 2 x i64> @vwsll_vx_i8_nxv2i64_sext(<vscale x 2 x i32> %a, i8 %b
define <vscale x 2 x i64> @vwsll_vx_i8_nxv2i64_zext(<vscale x 2 x i32> %a, i8 %b, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv2i64_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e64, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf8 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e64, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -250,9 +240,8 @@ define <vscale x 2 x i64> @vwsll_vx_i8_nxv2i64_zext(<vscale x 2 x i32> %a, i8 %b
define <vscale x 2 x i64> @vwsll_vi_nxv2i64(<vscale x 2 x i32> %a, <vscale x 2 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vi_nxv2i64:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e64, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a0, e64, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vi v8, v10, 2, v0.t
; CHECK-NEXT: ret
;
@@ -276,10 +265,9 @@ declare <vscale x 4 x i32> @llvm.vp.shl.nxv4i32(<vscale x 4 x i32>, <vscale x 4
define <vscale x 4 x i32> @vwsll_vv_nxv4i32_sext(<vscale x 4 x i16> %a, <vscale x 4 x i16> %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv4i32_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -298,10 +286,9 @@ define <vscale x 4 x i32> @vwsll_vv_nxv4i32_sext(<vscale x 4 x i16> %a, <vscale
define <vscale x 4 x i32> @vwsll_vv_nxv4i32_zext(<vscale x 4 x i16> %a, <vscale x 4 x i16> %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv4i32_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e32, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -320,17 +307,15 @@ define <vscale x 4 x i32> @vwsll_vv_nxv4i32_zext(<vscale x 4 x i16> %a, <vscale
define <vscale x 4 x i32> @vwsll_vx_i64_nxv4i32(<vscale x 4 x i16> %a, i64 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-RV32-LABEL: vwsll_vx_i64_nxv4i32:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetvli a1, zero, e32, m2, ta, ma
-; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsetvli zero, a2, e32, m2, ta, ma
+; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV32-NEXT: ret
;
; CHECK-RV64-LABEL: vwsll_vx_i64_nxv4i32:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetvli a2, zero, e32, m2, ta, ma
-; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV64-NEXT: ret
;
@@ -358,9 +343,8 @@ define <vscale x 4 x i32> @vwsll_vx_i64_nxv4i32(<vscale x 4 x i16> %a, i64 %b, <
define <vscale x 4 x i32> @vwsll_vx_i32_nxv4i32(<vscale x 4 x i16> %a, i32 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i32_nxv4i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e32, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-NEXT: ret
;
@@ -380,12 +364,11 @@ define <vscale x 4 x i32> @vwsll_vx_i32_nxv4i32(<vscale x 4 x i16> %a, i32 %b, <
define <vscale x 4 x i32> @vwsll_vx_i16_nxv4i32_sext(<vscale x 4 x i16> %a, i16 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i16_nxv4i32_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -406,12 +389,11 @@ define <vscale x 4 x i32> @vwsll_vx_i16_nxv4i32_sext(<vscale x 4 x i16> %a, i16
define <vscale x 4 x i32> @vwsll_vx_i16_nxv4i32_zext(<vscale x 4 x i16> %a, i16 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i16_nxv4i32_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -432,12 +414,11 @@ define <vscale x 4 x i32> @vwsll_vx_i16_nxv4i32_zext(<vscale x 4 x i16> %a, i16
define <vscale x 4 x i32> @vwsll_vx_i8_nxv4i32_sext(<vscale x 4 x i16> %a, i8 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv4i32_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf4 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -458,12 +439,11 @@ define <vscale x 4 x i32> @vwsll_vx_i8_nxv4i32_sext(<vscale x 4 x i16> %a, i8 %b
define <vscale x 4 x i32> @vwsll_vx_i8_nxv4i32_zext(<vscale x 4 x i16> %a, i8 %b, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv4i32_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, mf2, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf4 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -484,9 +464,8 @@ define <vscale x 4 x i32> @vwsll_vx_i8_nxv4i32_zext(<vscale x 4 x i16> %a, i8 %b
define <vscale x 4 x i32> @vwsll_vi_nxv4i32(<vscale x 4 x i16> %a, <vscale x 4 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vi_nxv4i32:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e32, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vi v8, v10, 2, v0.t
; CHECK-NEXT: ret
;
@@ -511,10 +490,9 @@ declare <vscale x 8 x i16> @llvm.vp.shl.nxv8i16(<vscale x 8 x i16>, <vscale x 8
define <vscale x 8 x i16> @vwsll_vv_nxv8i16_sext(<vscale x 8 x i8> %a, <vscale x 8 x i8> %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv8i16_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -533,10 +511,9 @@ define <vscale x 8 x i16> @vwsll_vv_nxv8i16_sext(<vscale x 8 x i8> %a, <vscale x
define <vscale x 8 x i16> @vwsll_vv_nxv8i16_zext(<vscale x 8 x i8> %a, <vscale x 8 x i8> %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vv_nxv8i16_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -555,17 +532,15 @@ define <vscale x 8 x i16> @vwsll_vv_nxv8i16_zext(<vscale x 8 x i8> %a, <vscale x
define <vscale x 8 x i16> @vwsll_vx_i64_nxv8i16(<vscale x 8 x i8> %a, i64 %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-RV32-LABEL: vwsll_vx_i64_nxv8i16:
; CHECK-RV32: # %bb.0:
-; CHECK-RV32-NEXT: vsetvli a1, zero, e16, m2, ta, ma
-; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsetvli zero, a2, e16, m2, ta, ma
+; CHECK-RV32-NEXT: vzext.vf2 v10, v8
; CHECK-RV32-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV32-NEXT: ret
;
; CHECK-RV64-LABEL: vwsll_vx_i64_nxv8i16:
; CHECK-RV64: # %bb.0:
-; CHECK-RV64-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-RV64-NEXT: vzext.vf2 v10, v8
; CHECK-RV64-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-RV64-NEXT: ret
;
@@ -593,9 +568,8 @@ define <vscale x 8 x i16> @vwsll_vx_i64_nxv8i16(<vscale x 8 x i8> %a, i64 %b, <v
define <vscale x 8 x i16> @vwsll_vx_i32_nxv8i16(<vscale x 8 x i8> %a, i32 %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i32_nxv8i16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-NEXT: ret
;
@@ -616,9 +590,8 @@ define <vscale x 8 x i16> @vwsll_vx_i32_nxv8i16(<vscale x 8 x i8> %a, i32 %b, <v
define <vscale x 8 x i16> @vwsll_vx_i16_nxv8i16(<vscale x 8 x i8> %a, i16 %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i16_nxv8i16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e16, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vx v8, v10, a0, v0.t
; CHECK-NEXT: ret
;
@@ -638,12 +611,11 @@ define <vscale x 8 x i16> @vwsll_vx_i16_nxv8i16(<vscale x 8 x i8> %a, i16 %b, <v
define <vscale x 8 x i16> @vwsll_vx_i8_nxv8i16_sext(<vscale x 8 x i8> %a, i8 %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv8i16_sext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -664,12 +636,11 @@ define <vscale x 8 x i16> @vwsll_vx_i8_nxv8i16_sext(<vscale x 8 x i8> %a, i8 %b,
define <vscale x 8 x i16> @vwsll_vx_i8_nxv8i16_zext(<vscale x 8 x i8> %a, i8 %b, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vx_i8_nxv8i16_zext:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
+; CHECK-NEXT: vsetvli zero, a1, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v9, a0
; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma
; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vzext.vf2 v12, v9
-; CHECK-NEXT: vsetvli zero, a1, e16, m2, ta, ma
; CHECK-NEXT: vsll.vv v8, v10, v12, v0.t
; CHECK-NEXT: ret
;
@@ -690,9 +661,8 @@ define <vscale x 8 x i16> @vwsll_vx_i8_nxv8i16_zext(<vscale x 8 x i8> %a, i8 %b,
define <vscale x 8 x i16> @vwsll_vi_nxv8i16(<vscale x 8 x i8> %a, <vscale x 8 x i1> %m, i32 zeroext %vl) {
; CHECK-LABEL: vwsll_vi_nxv8i16:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
-; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma
+; CHECK-NEXT: vzext.vf2 v10, v8
; CHECK-NEXT: vsll.vi v8, v10, 2, v0.t
; CHECK-NEXT: ret
;