[llvm] [RISCV] Introduce VLOptimizer pass (PR #108640)

via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 13 13:06:34 PDT 2024


llvmbot wrote:



@llvm/pr-subscribers-backend-risc-v

Author: Michael Maitland (michaelmaitland)

<details>
<summary>Changes</summary>

The purpose of this optimization is to make the VL argument of each instruction that has one as small as possible. It is implemented by visiting each instruction in reverse order and checking, for instructions with a VL argument, whether that VL can be reduced.

By putting this pass before VSETVLI insertion, we see three kinds of changes to generated code:
1. Eliminate VSETVLI instructions
2. Reduce the VL toggle on VSETVLI instructions that also change vtype
3. Reduce the VL set by a VSETVLI instruction

For safety, the set of supported instructions is currently a conservative allowlist. In the future, we could add more instructions to `isSupportedInstr` to enable even more VL optimization.

We originally wrote this pass because vector GEP instructions do not take a VL, so the RISC-V backend emits code that implements GEP with VL=VLMAX. As a result, some vector instructions write to lanes between the intended VL and VLMAX that will never be read. As an alternative to this pass, we considered adding a vector-predicated GEP instruction, but it would not fit well into the intrinsic type system, since GEP has a variable number of arguments, each with arbitrary types. The second approach we considered was to run this pass after VSETVLI insertion, but we found it more difficult to recognize optimization opportunities there, especially across basic block boundaries, and the data flow analysis was also more expensive and complex.

While this pass solves the GEP problem, we have expanded it to handle more cases of VL optimization, and there is opportunity for the analysis to be improved to enable even more optimization.

In our downstream compiler, this was able to optimize > 5000 VLs on spec2006 and > 6000 VLs on spec2017. These measurements were using `force-tail-folding-style=data-with-evl` during vectorization. I don't have numbers for the upstream compiler.

---

Patch is 242.21 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/108640.diff


29 Files Affected:

- (modified) llvm/lib/Target/RISCV/CMakeLists.txt (+1) 
- (modified) llvm/lib/Target/RISCV/RISCV.h (+4) 
- (modified) llvm/lib/Target/RISCV/RISCVTargetMachine.cpp (+8-1) 
- (added) llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp (+1468) 
- (modified) llvm/test/CodeGen/RISCV/O3-pipeline.ll (+2-1) 
- (modified) llvm/test/CodeGen/RISCV/rvv/narrow-shift-extend.ll (+27-27) 
- (modified) llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll (+18-36) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vdiv-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vdivu-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll (+14-28) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vfwmsac-vp.ll (+12-24) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll (+15-30) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll (+15-30) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vmax-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vmaxu-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vmin-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vminu-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vmul-vp.ll (+3-4) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vpgather-sdnode.ll (+174-209) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll (+166-199) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vrem-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vremu-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll (-3) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vshl-vp.ll (+1-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vsra-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vsrl-vp.ll (-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vssub-vp.ll (+1-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vssubu-vp.ll (+1-2) 
- (modified) llvm/test/CodeGen/RISCV/rvv/vwsll-vp.ll (+30-60) 


``````````diff
diff --git a/llvm/lib/Target/RISCV/CMakeLists.txt b/llvm/lib/Target/RISCV/CMakeLists.txt
index 124bc239451aed..46ef94b4dd2931 100644
--- a/llvm/lib/Target/RISCV/CMakeLists.txt
+++ b/llvm/lib/Target/RISCV/CMakeLists.txt
@@ -59,6 +59,7 @@ add_llvm_target(RISCVCodeGen
   RISCVTargetObjectFile.cpp
   RISCVTargetTransformInfo.cpp
   RISCVVectorPeephole.cpp
+  RISCVVLOptimizer.cpp
   GISel/RISCVCallLowering.cpp
   GISel/RISCVInstructionSelector.cpp
   GISel/RISCVLegalizerInfo.cpp
diff --git a/llvm/lib/Target/RISCV/RISCV.h b/llvm/lib/Target/RISCV/RISCV.h
index 5a94ada8f8dd46..651119d10d1119 100644
--- a/llvm/lib/Target/RISCV/RISCV.h
+++ b/llvm/lib/Target/RISCV/RISCV.h
@@ -99,6 +99,10 @@ void initializeRISCVO0PreLegalizerCombinerPass(PassRegistry &);
 
 FunctionPass *createRISCVPreLegalizerCombiner();
 void initializeRISCVPreLegalizerCombinerPass(PassRegistry &);
+
+FunctionPass *createRISCVVLOptimizerPass();
+void initializeRISCVVLOptimizerPass(PassRegistry &);
+
 } // namespace llvm
 
 #endif
diff --git a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
index 794df2212dfa53..bfedc6b177332e 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
@@ -103,6 +103,10 @@ static cl::opt<bool> EnableVSETVLIAfterRVVRegAlloc(
     cl::desc("Insert vsetvls after vector register allocation"),
     cl::init(true));
 
+static cl::opt<bool> EnableVLOptimizer("riscv-enable-vloptimizer",
+                                       cl::desc("Enable the VL Optimizer pass"),
+                                       cl::init(true), cl::Hidden);
+
 extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeRISCVTarget() {
   RegisterTargetMachine<RISCVTargetMachine> X(getTheRISCV32Target());
   RegisterTargetMachine<RISCVTargetMachine> Y(getTheRISCV64Target());
@@ -550,8 +554,11 @@ void RISCVPassConfig::addMachineSSAOptimization() {
 
 void RISCVPassConfig::addPreRegAlloc() {
   addPass(createRISCVPreRAExpandPseudoPass());
-  if (TM->getOptLevel() != CodeGenOptLevel::None)
+  if (TM->getOptLevel() != CodeGenOptLevel::None) {
     addPass(createRISCVMergeBaseOffsetOptPass());
+    if (EnableVLOptimizer)
+      addPass(createRISCVVLOptimizerPass());
+  }
 
   addPass(createRISCVInsertReadWriteCSRPass());
   addPass(createRISCVInsertWriteVXRMPass());
diff --git a/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp b/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp
new file mode 100644
index 00000000000000..d25db9b38bec44
--- /dev/null
+++ b/llvm/lib/Target/RISCV/RISCVVLOptimizer.cpp
@@ -0,0 +1,1468 @@
+//===-------------- RISCVVLOptimizer.cpp - VL Optimizer ------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===---------------------------------------------------------------------===//
+//
+// This pass reduces the VL where possible at the MI level, before VSETVLI
+// instructions are inserted.
+//
+// The purpose of this optimization is to make the VL argument of each
+// instruction that has one as small as possible. It is implemented by visiting
+// each instruction in reverse order and checking, for instructions with a VL
+// argument, whether that VL can be reduced.
+//
+//===---------------------------------------------------------------------===//
+
+#include "RISCV.h"
+#include "RISCVMachineFunctionInfo.h"
+#include "RISCVSubtarget.h"
+#include "llvm/ADT/SetVector.h"
+#include "llvm/CodeGen/MachineDominators.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/InitializePasses.h"
+
+#include <algorithm>
+
+using namespace llvm;
+
+#define DEBUG_TYPE "riscv-vl-optimizer"
+
+namespace {
+
+class RISCVVLOptimizer : public MachineFunctionPass {
+  const MachineRegisterInfo *MRI;
+  const MachineDominatorTree *MDT;
+
+public:
+  static char ID;
+
+  RISCVVLOptimizer() : MachineFunctionPass(ID) {
+    initializeRISCVVLOptimizerPass(*PassRegistry::getPassRegistry());
+  }
+
+  bool runOnMachineFunction(MachineFunction &MF) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    AU.addRequired<MachineDominatorTreeWrapperPass>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
+
+  StringRef getPassName() const override { return "RISC-V VL Optimizer"; }
+
+private:
+  bool tryReduceVL(MachineInstr &MI);
+};
+
+} // end anonymous namespace
+
+char RISCVVLOptimizer::ID = 0;
+INITIALIZE_PASS_BEGIN(RISCVVLOptimizer, DEBUG_TYPE, "RISC-V VL Optimizer",
+                      false, false)
+INITIALIZE_PASS_DEPENDENCY(MachineDominatorTreeWrapperPass)
+INITIALIZE_PASS_END(RISCVVLOptimizer, DEBUG_TYPE, "RISC-V VL Optimizer", false,
+                    false)
+
+FunctionPass *llvm::createRISCVVLOptimizerPass() {
+  return new RISCVVLOptimizer();
+}
+
+/// Return true if R is a physical or virtual vector register, false otherwise.
+static bool isVectorRegClass(Register R, const MachineRegisterInfo *MRI) {
+  if (R.isPhysical())
+    return RISCV::VRRegClass.contains(R);
+  const TargetRegisterClass *RC = MRI->getRegClass(R);
+  return RISCV::VRRegClass.hasSubClassEq(RC) ||
+         RISCV::VRM2RegClass.hasSubClassEq(RC) ||
+         RISCV::VRM4RegClass.hasSubClassEq(RC) ||
+         RISCV::VRM8RegClass.hasSubClassEq(RC);
+}
+
+/// Represents the EMUL and EEW of a MachineOperand.
+struct OperandInfo {
+  enum class State {
+    Unknown,
+    Known,
+  } S;
+
+  // Represent as 1,2,4,8, ... and fractional indicator. This is because
+  // EMUL can take on values that don't map to RISCVII::VLMUL values exactly.
+  // For example, a mask operand can have an EMUL less than MF8.
+  std::pair<unsigned, bool> EMUL;
+
+  unsigned Log2EEW;
+
+  OperandInfo(RISCVII::VLMUL EMUL, unsigned Log2EEW)
+      : S(State::Known), EMUL(RISCVVType::decodeVLMUL(EMUL)), Log2EEW(Log2EEW) {
+  }
+
+  OperandInfo(std::pair<unsigned, bool> EMUL, unsigned Log2EEW)
+      : S(State::Known), EMUL(EMUL), Log2EEW(Log2EEW) {}
+
+  OperandInfo(State S) : S(S) {
+    assert(S != State::Known &&
+           "This constructor may only be used to construct "
+           "an Unknown OperandInfo");
+  }
+
+  bool isUnknown() { return S == State::Unknown; }
+  bool isKnown() { return S == State::Known; }
+
+  static bool EMULAndEEWAreEqual(OperandInfo A, OperandInfo B) {
+    assert(A.isKnown() && B.isKnown() && "Both operands must be known");
+    return A.Log2EEW == B.Log2EEW && A.EMUL.first == B.EMUL.first &&
+           A.EMUL.second == B.EMUL.second;
+  }
+};
+
+/// Return the RISCVII::VLMUL that is two times VLMul.
+/// Precondition: VLMul is not LMUL_RESERVED or LMUL_8.
+static RISCVII::VLMUL twoTimesVLMUL(RISCVII::VLMUL VLMul) {
+  switch (VLMul) {
+  case RISCVII::VLMUL::LMUL_F8:
+    return RISCVII::VLMUL::LMUL_F4;
+  case RISCVII::VLMUL::LMUL_F4:
+    return RISCVII::VLMUL::LMUL_F2;
+  case RISCVII::VLMUL::LMUL_F2:
+    return RISCVII::VLMUL::LMUL_1;
+  case RISCVII::VLMUL::LMUL_1:
+    return RISCVII::VLMUL::LMUL_2;
+  case RISCVII::VLMUL::LMUL_2:
+    return RISCVII::VLMUL::LMUL_4;
+  case RISCVII::VLMUL::LMUL_4:
+    return RISCVII::VLMUL::LMUL_8;
+  case RISCVII::VLMUL::LMUL_8:
+  default:
+    llvm_unreachable("Could not multiply VLMul by 2");
+  }
+}
+
+/// Return the RISCVII::VLMUL that is VLMul / 2.
+/// Precondition: VLMul is not LMUL_RESERVED or LMUL_F8.
+static RISCVII::VLMUL halfVLMUL(RISCVII::VLMUL VLMul) {
+  switch (VLMul) {
+  case RISCVII::VLMUL::LMUL_F4:
+    return RISCVII::VLMUL::LMUL_F8;
+  case RISCVII::VLMUL::LMUL_F2:
+    return RISCVII::VLMUL::LMUL_F4;
+  case RISCVII::VLMUL::LMUL_1:
+    return RISCVII::VLMUL::LMUL_F2;
+  case RISCVII::VLMUL::LMUL_2:
+    return RISCVII::VLMUL::LMUL_1;
+  case RISCVII::VLMUL::LMUL_4:
+    return RISCVII::VLMUL::LMUL_2;
+  case RISCVII::VLMUL::LMUL_8:
+    return RISCVII::VLMUL::LMUL_4;
+  case RISCVII::VLMUL::LMUL_F8:
+  default:
+    llvm_unreachable("Could not divide VLMul by 2");
+  }
+}
+
+/// Return EMUL = (EEW / SEW) * LMUL where EEW comes from Log2EEW and LMUL and
+/// SEW are from the TSFlags of MI.
+static std::pair<unsigned, bool>
+getEMULEqualsEEWDivSEWTimesLMUL(unsigned Log2EEW, const MachineInstr &MI) {
+  RISCVII::VLMUL MIVLMUL = RISCVII::getLMul(MI.getDesc().TSFlags);
+  auto [MILMUL, MILMULIsFractional] = RISCVVType::decodeVLMUL(MIVLMUL);
+  unsigned MILog2SEW =
+      MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+  unsigned MISEW = 1 << MILog2SEW;
+
+  unsigned EEW = 1 << Log2EEW;
+  // Calculate (EEW/SEW)*LMUL preserving fractions less than 1. Use GCD
+  // to put fraction in simplest form.
+  unsigned Num = EEW, Denom = MISEW;
+  int GCD = MILMULIsFractional ? std::gcd(Num, Denom * MILMUL)
+                               : std::gcd(Num * MILMUL, Denom);
+  Num = MILMULIsFractional ? Num / GCD : Num * MILMUL / GCD;
+  Denom = MILMULIsFractional ? Denom * MILMUL / GCD : Denom / GCD;
+  return std::make_pair(Num > Denom ? Num : Denom, Denom > Num);
+}
+
+/// An indexed segment load or store operand has the form v.*seg<nf>ei<eew>.v.
+/// Data has EEW=SEW, EMUL=LMUL. Index has EEW=<eew>, EMUL=(EEW/SEW)*LMUL. LMUL
+/// and SEW come from the TSFlags of MI.
+static OperandInfo
+getIndexSegmentLoadStoreOperandInfo(unsigned Log2EEW, const MachineInstr &MI,
+                                    const MachineOperand &MO) {
+  // Operand 0 is data register
+  // Data vector register group has EEW=SEW, EMUL=LMUL.
+  if (MO.getOperandNo() == 0) {
+    RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+    unsigned MILog2SEW =
+        MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+    return OperandInfo(MIVLMul, MILog2SEW);
+  }
+
+  // Operand 2 is the index vector register
+  // v.*seg<nf>ei<eew>.v
+  // Index vector register group has EEW=<eew>, EMUL=(EEW/SEW)*LMUL.
+  if (MO.getOperandNo() == 2)
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(Log2EEW, MI), Log2EEW);
+
+  llvm_unreachable("Could not get OperandInfo for non-vector register of an "
+                   "indexed segment load or store instruction");
+}
+
+/// Dest has EEW=SEW and EMUL=LMUL. Source has EEW=SEW/Factor (e.g. Factor=2
+/// means EEW=SEW/2) and EMUL=(EEW/SEW)*LMUL. LMUL and SEW come from MI's TSFlags.
+static OperandInfo getIntegerExtensionOperandInfo(unsigned Factor,
+                                                  const MachineInstr &MI,
+                                                  const MachineOperand &MO) {
+  RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+  unsigned MILog2SEW =
+      MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+
+  if (MO.getOperandNo() == 0)
+    return OperandInfo(MIVLMul, MILog2SEW);
+
+  unsigned MISEW = 1 << MILog2SEW;
+  unsigned EEW = MISEW / Factor;
+  unsigned Log2EEW = Log2_32(EEW);
+
+  return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(Log2EEW, MI), Log2EEW);
+}
+
+/// Check whether MO is a mask operand of MI.
+static bool isMaskOperand(const MachineInstr &MI, const MachineOperand &MO,
+                          const MachineRegisterInfo *MRI) {
+
+  if (!MO.isReg() || !isVectorRegClass(MO.getReg(), MRI))
+    return false;
+
+  const MCInstrDesc &Desc = MI.getDesc();
+  return Desc.operands()[MO.getOperandNo()].RegClass == RISCV::VMV0RegClassID;
+}
+
+/// Return the OperandInfo for MO, which is an operand of MI.
+static OperandInfo getOperandInfo(const MachineInstr &MI,
+                                  const MachineOperand &MO,
+                                  const MachineRegisterInfo *MRI) {
+  const RISCVVPseudosTable::PseudoInfo *RVV =
+      RISCVVPseudosTable::getPseudoInfo(MI.getOpcode());
+  assert(RVV && "Could not find MI in PseudoTable");
+
+  // MI has a VLMUL and SEW associated with it. The RVV specification defines
+  // the LMUL and SEW of each operand and definition in relation to MI.VLMUL and
+  // MI.SEW.
+  RISCVII::VLMUL MIVLMul = RISCVII::getLMul(MI.getDesc().TSFlags);
+  unsigned MILog2SEW =
+      MI.getOperand(RISCVII::getSEWOpNum(MI.getDesc())).getImm();
+  bool IsMODef = MO.getOperandNo() == 0;
+
+  // All mask operands have EEW=1, EMUL=(EEW/SEW)*LMUL
+  if (isMaskOperand(MI, MO, MRI))
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(0, MI), 0);
+
+  // TODO: Pseudos that end in _MASK or _TU can have a merge operand.
+  // We bail out early for instructions that have merge operands for now.
+  // Merge operand is first operand after defs.
+  if (MO.getOperandNo() == MI.getNumExplicitDefs() && MO.isReg() && MO.isTied())
+    return OperandInfo(OperandInfo::State::Unknown);
+
+  // switch against BaseInstr to reduce number of cases that need to be
+  // considered.
+  switch (RVV->BaseInstr) {
+
+  // 6. Configuration-Setting Instructions
+  // Configuration setting instructions do not read or write vector registers
+  case RISCV::VSETIVLI:
+  case RISCV::VSETVL:
+  case RISCV::VSETVLI:
+    llvm_unreachable("Configuration setting instructions do not read or write "
+                     "vector registers");
+
+  // 7. Vector Loads and Stores
+  // 7.4. Vector Unit-Stride Instructions
+  // 7.5. Vector Strided Instructions
+  // 7.7. Unit-stride Fault-Only-First Loads
+  /// Dest EEW encoded in the instruction and EMUL=(EEW/SEW)*LMUL
+  case RISCV::VLE8_V:
+  case RISCV::VSE8_V:
+  case RISCV::VLM_V:
+  case RISCV::VSM_V:
+  case RISCV::VLSE8_V:
+  case RISCV::VSSE8_V:
+  case RISCV::VLE8FF_V:
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(3, MI), 3);
+  case RISCV::VLE16_V:
+  case RISCV::VSE16_V:
+  case RISCV::VLSE16_V:
+  case RISCV::VSSE16_V:
+  case RISCV::VLE16FF_V:
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(4, MI), 4);
+  case RISCV::VLE32_V:
+  case RISCV::VSE32_V:
+  case RISCV::VLSE32_V:
+  case RISCV::VSSE32_V:
+  case RISCV::VLE32FF_V:
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(5, MI), 5);
+  case RISCV::VLE64_V:
+  case RISCV::VSE64_V:
+  case RISCV::VLSE64_V:
+  case RISCV::VSSE64_V:
+  case RISCV::VLE64FF_V:
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(6, MI), 6);
+
+  // 7.6. Vector Indexed Instructions
+  // Data EEW=SEW, EMUL=LMUL. Index EEW=<eew> and EMUL=(EEW/SEW)*LMUL
+  case RISCV::VLUXEI8_V:
+  case RISCV::VLOXEI8_V:
+  case RISCV::VSUXEI8_V:
+  case RISCV::VSOXEI8_V:
+    if (MO.getOperandNo() == 0)
+      return OperandInfo(MIVLMul, MILog2SEW);
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(3, MI), 3);
+  case RISCV::VLUXEI16_V:
+  case RISCV::VLOXEI16_V:
+  case RISCV::VSUXEI16_V:
+  case RISCV::VSOXEI16_V:
+    if (MO.getOperandNo() == 0)
+      return OperandInfo(MIVLMul, MILog2SEW);
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(4, MI), 4);
+  case RISCV::VLUXEI32_V:
+  case RISCV::VLOXEI32_V:
+  case RISCV::VSUXEI32_V:
+  case RISCV::VSOXEI32_V:
+    if (MO.getOperandNo() == 0)
+      return OperandInfo(MIVLMul, MILog2SEW);
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(5, MI), 5);
+  case RISCV::VLUXEI64_V:
+  case RISCV::VLOXEI64_V:
+  case RISCV::VSUXEI64_V:
+  case RISCV::VSOXEI64_V:
+    if (MO.getOperandNo() == 0)
+      return OperandInfo(MIVLMul, MILog2SEW);
+    return OperandInfo(getEMULEqualsEEWDivSEWTimesLMUL(6, MI), 6);
+
+  // 7.8. Vector Load/Store Segment Instructions
+  // 7.8.1. Vector Unit-Stride Segment Loads and Stores
+  // v.*seg<nf>e<eew>.*
+  // EEW=eew, EMUL=LMUL
+  case RISCV::VLSEG2E8_V:
+  case RISCV::VLSEG2E8FF_V:
+  case RISCV::VLSEG3E8_V:
+  case RISCV::VLSEG3E8FF_V:
+  case RISCV::VLSEG4E8_V:
+  case RISCV::VLSEG4E8FF_V:
+  case RISCV::VLSEG5E8_V:
+  case RISCV::VLSEG5E8FF_V:
+  case RISCV::VLSEG6E8_V:
+  case RISCV::VLSEG6E8FF_V:
+  case RISCV::VLSEG7E8_V:
+  case RISCV::VLSEG7E8FF_V:
+  case RISCV::VLSEG8E8_V:
+  case RISCV::VLSEG8E8FF_V:
+  case RISCV::VSSEG2E8_V:
+  case RISCV::VSSEG3E8_V:
+  case RISCV::VSSEG4E8_V:
+  case RISCV::VSSEG5E8_V:
+  case RISCV::VSSEG6E8_V:
+  case RISCV::VSSEG7E8_V:
+  case RISCV::VSSEG8E8_V:
+    return OperandInfo(MIVLMul, 3);
+  case RISCV::VLSEG2E16_V:
+  case RISCV::VLSEG2E16FF_V:
+  case RISCV::VLSEG3E16_V:
+  case RISCV::VLSEG3E16FF_V:
+  case RISCV::VLSEG4E16_V:
+  case RISCV::VLSEG4E16FF_V:
+  case RISCV::VLSEG5E16_V:
+  case RISCV::VLSEG5E16FF_V:
+  case RISCV::VLSEG6E16_V:
+  case RISCV::VLSEG6E16FF_V:
+  case RISCV::VLSEG7E16_V:
+  case RISCV::VLSEG7E16FF_V:
+  case RISCV::VLSEG8E16_V:
+  case RISCV::VLSEG8E16FF_V:
+  case RISCV::VSSEG2E16_V:
+  case RISCV::VSSEG3E16_V:
+  case RISCV::VSSEG4E16_V:
+  case RISCV::VSSEG5E16_V:
+  case RISCV::VSSEG6E16_V:
+  case RISCV::VSSEG7E16_V:
+  case RISCV::VSSEG8E16_V:
+    return OperandInfo(MIVLMul, 4);
+  case RISCV::VLSEG2E32_V:
+  case RISCV::VLSEG2E32FF_V:
+  case RISCV::VLSEG3E32_V:
+  case RISCV::VLSEG3E32FF_V:
+  case RISCV::VLSEG4E32_V:
+  case RISCV::VLSEG4E32FF_V:
+  case RISCV::VLSEG5E32_V:
+  case RISCV::VLSEG5E32FF_V:
+  case RISCV::VLSEG6E32_V:
+  case RISCV::VLSEG6E32FF_V:
+  case RISCV::VLSEG7E32_V:
+  case RISCV::VLSEG7E32FF_V:
+  case RISCV::VLSEG8E32_V:
+  case RISCV::VLSEG8E32FF_V:
+  case RISCV::VSSEG2E32_V:
+  case RISCV::VSSEG3E32_V:
+  case RISCV::VSSEG4E32_V:
+  case RISCV::VSSEG5E32_V:
+  case RISCV::VSSEG6E32_V:
+  case RISCV::VSSEG7E32_V:
+  case RISCV::VSSEG8E32_V:
+    return OperandInfo(MIVLMul, 5);
+  case RISCV::VLSEG2E64_V:
+  case RISCV::VLSEG2E64FF_V:
+  case RISCV::VLSEG3E64_V:
+  case RISCV::VLSEG3E64FF_V:
+  case RISCV::VLSEG4E64_V:
+  case RISCV::VLSEG4E64FF_V:
+  case RISCV::VLSEG5E64_V:
+  case RISCV::VLSEG5E64FF_V:
+  case RISCV::VLSEG6E64_V:
+  case RISCV::VLSEG6E64FF_V:
+  case RISCV::VLSEG7E64_V:
+  case RISCV::VLSEG7E64FF_V:
+  case RISCV::VLSEG8E64_V:
+  case RISCV::VLSEG8E64FF_V:
+  case RISCV::VSSEG2E64_V:
+  case RISCV::VSSEG3E64_V:
+  case RISCV::VSSEG4E64_V:
+  case RISCV::VSSEG5E64_V:
+  case RISCV::VSSEG6E64_V:
+  case RISCV::VSSEG7E64_V:
+  case RISCV::VSSEG8E64_V:
+    return OperandInfo(MIVLMul, 6);
+
+  // 7.8.2. Vector Strided Segment Loads and Stores
+  case RISCV::VLSSEG2E8_V:
+  case RISCV::VLSSEG3E8_V:
+  case RISCV::VLSSEG4E8_V:
+  case RISCV::VLSSEG5E8_V:
+  case RISCV::VLSSEG6E8_V:
+  case RISCV::VLSSEG7E8_V:
+  case RISCV::VLSSEG8E8_V:
+  case RISCV::VSSSEG2E8_V:
+  case RISCV::VSSSEG3E8_V:
+  case RISCV::VSSSEG4E8_V:
+  case RISCV::VSSSEG5E8_V:
+  case RISCV::VSSSEG6E8_V:
+  case RISCV::VSSSEG7E8_V:
+  case RISCV::VSSSEG8E8_V:
+    return OperandInfo(MIVLMul, 3);
+  case RISCV::VLSSEG2E16_V:
+  case RISCV::VLSSEG3E16_V:
+  case RISCV::VLSSEG4E16_V:
+  case RISCV::VLSSEG5E16_V:
+  case RISCV::VLSSEG6E16_V:
+  case RISCV::VLSSEG7E16_V:
+  case RISCV::VLSSEG8E16_V:
+  case RISCV::VSSSEG2E16_V:
+  case RISCV::VSSSEG3E16_V:
+  case RISCV::VSSSEG4E16_V:
+  case RISCV::VSSSEG5E16_V:
+  case RISCV::VSSSEG6E16_V:
+  case RISCV::VSSSEG7E16_V:
+  case RISCV::VSSSEG8E16_V:
+    return OperandInfo(MIVLMul, 4);
+  case RISCV::VLSSEG2E32_V:
+  case RISCV::VLSSEG3E32_V:
+  case RISCV::VLSSEG4E32_V:
+  case RISCV::VLSSEG5E32_V:
+  case RISCV::VLSSEG6E32_V:
+  case RISCV::VLSSEG7E32_V:
+  case RISCV::VLSSEG8E32_V:
+  case RISCV::VSSSEG2E32_V:
+  case RISCV::VSSSEG3E32_V:
+  case RISCV::VSSSEG4E32_V:
+  case RISCV::VSSSEG5E32_V:
+  case RISCV::VSSSEG6E32_V:
+  case RISCV::VSSSEG7E32_V:
+  case RISCV::VSSSEG8E32_V:
+    return OperandInfo(MIVLMul, 5);
+  case RISCV::VLSSEG2E64_V:
+  case RISCV::VLSSEG3E64_V:
+  case RISCV::VLSSEG4E64_V:
+  case RISCV::VLSSEG5E64_V:
+  case RISCV::VLSSEG6E64_V:
+  case RISCV::VLSSEG7E64_V:
+  case RISCV::VLSSEG8E64_V:
+  case RISCV::VSSSEG2E64_V:
+  case RISCV::VSSSEG3E64_V:
+  case RISCV::VSSSEG4E64_V:
+  case RISCV::VSSSEG5E64_V:
+  case RISCV::VSSSEG6E64_V:
+  case RISCV::VSSSEG7E64_V:
+  case RISCV::VSSSEG8E64_V:
+    return OperandInfo(MIVLMul, 6);
+
+  // 7.8.3. Vector Indexed Segment Loads and Stores
+  case RISCV::VLUXSEG2EI8_V:
+  case RISCV::VLUXSEG3EI8_V:
+  case RISCV::VLUXSEG4EI8_V:
+  case RISCV::VLUXSEG5EI8_V:
+  case RISCV::VLUXSEG6EI8_V:
+  case RISCV::VLUXSEG7EI8_V:
+  case RISCV::VLUXSEG8EI8_V:
+  case RISCV::VLOXSEG2EI8_V:
+  case RISCV::VLOXSEG3EI8_V:
+  case RISCV::VLOXSEG4EI8_V:
+  case RISCV::VLOXSEG5EI8_V:
+  case RISCV::VLOXSEG6EI8_V:
+  case RISCV::VLOXSEG7EI8_V:
+  case RISCV::VLOXSEG8EI8_V:
+  case RISCV::VSUXSEG2EI8_V:
+  case RISCV::VSUXSEG3EI8_V:
+  case RISCV::VSUXSEG4EI8_V:
+  case RISCV::VSUXSEG5EI8_V:
+  case RISCV::VSUXSEG6EI8_V:
+  case RISCV::VSUXSEG7EI8_V:
+  case RISCV::VSUXSEG8EI8_V:
+  case RISCV::VSOXSEG2EI8_V...
[truncated]

``````````

</details>


https://github.com/llvm/llvm-project/pull/108640

