[llvm] f68f04d - [RISCV] Add vendor-defined XTheadCondMov (conditional move) extension

Philipp Tomsich via llvm-commits llvm-commits at lists.llvm.org
Fri Feb 24 12:41:06 PST 2023


Author: Philipp Tomsich
Date: 2023-02-24T21:40:42+01:00
New Revision: f68f04d07c69049a95a5f43b5d001ee5a2c87338

URL: https://github.com/llvm/llvm-project/commit/f68f04d07c69049a95a5f43b5d001ee5a2c87338
DIFF: https://github.com/llvm/llvm-project/commit/f68f04d07c69049a95a5f43b5d001ee5a2c87338.diff

LOG: [RISCV] Add vendor-defined XTheadCondMov (conditional move) extension

The vendor-defined XTheadCondMov (somewhat related to the upcoming
Zicond and XVentanaCondOps) extension add conditional move
instructions with $rd being an input and an ouput instructions.
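Informally, the semantics of the two new instructions can be modeled as
follows (an illustrative C++ sketch based on the linked specification;
the function names are made up and are not part of the patch):

```cpp
#include <cstdint>

// th.mveqz rd, rs1, rs2 : rd := (rs2 == 0) ? rs1 : rd
// The prior value of rd flows through when the condition fails, which
// is why rd is modeled as an input as well as the result.
uint64_t th_mveqz(uint64_t rd, uint64_t rs1, uint64_t rs2) {
  return (rs2 == 0) ? rs1 : rd;
}

// th.mvnez rd, rs1, rs2 : rd := (rs2 != 0) ? rs1 : rd
uint64_t th_mvnez(uint64_t rd, uint64_t rs1, uint64_t rs2) {
  return (rs2 != 0) ? rs1 : rd;
}
```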

It is supported by Alibaba T-Head's C9xx cores (e.g., found in the
wild in the Allwinner D1).

The current (as of this commit) public documentation for this
extension is available at:
  https://github.com/T-head-Semi/thead-extension-spec/releases/download/2.2.2/xthead-2023-01-30-2.2.2.pdf

Support for these instructions has already landed in GNU Binutils:
  https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=73442230966a22b3238b2074691a71d7b4ed914a

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D144681

Added: 
    llvm/test/CodeGen/RISCV/condops.ll
    llvm/test/MC/RISCV/xtheadcondmov-invalid.s
    llvm/test/MC/RISCV/xtheadcondmov-valid.s

Modified: 
    llvm/docs/RISCVUsage.rst
    llvm/docs/ReleaseNotes.rst
    llvm/lib/Support/RISCVISAInfo.cpp
    llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp
    llvm/lib/Target/RISCV/RISCVFeatures.td
    llvm/lib/Target/RISCV/RISCVISelLowering.cpp
    llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
    llvm/lib/Target/RISCV/RISCVInstrInfoXTHead.td
    llvm/test/CodeGen/RISCV/attributes.ll

Removed: 
    llvm/test/CodeGen/RISCV/xventanacondops.ll


################################################################################
diff --git a/llvm/docs/RISCVUsage.rst b/llvm/docs/RISCVUsage.rst
index 5be15a158cf37..429b59a8d9404 100644
--- a/llvm/docs/RISCVUsage.rst
+++ b/llvm/docs/RISCVUsage.rst
@@ -186,6 +186,9 @@ The current vendor extensions supported are:
 ``XTHeadBs``
   LLVM implements `the THeadBs (single-bit operations) vendor-defined instructions specified in <https://github.com/T-head-Semi/thead-extension-spec/releases/download/2.2.2/xthead-2023-01-30-2.2.2.pdf>`_  by T-HEAD of Alibaba.  Instructions are prefixed with `th.` as described in the specification.
 
+``XTHeadCondMov``
+  LLVM implements `the THeadCondMov (conditional move) vendor-defined instructions specified in <https://github.com/T-head-Semi/thead-extension-spec/releases/download/2.2.2/xthead-2023-01-30-2.2.2.pdf>`_  by T-HEAD of Alibaba.  Instructions are prefixed with `th.` as described in the specification.
+
 ``XTHeadCmo``
   LLVM implements `the THeadCmo (cache management operations) vendor-defined instructions specified in <https://github.com/T-head-Semi/thead-extension-spec/releases/download/2.2.2/xthead-2023-01-30-2.2.2.pdf>`_  by T-HEAD of Alibaba.  Instructions are prefixed with `th.` as described in the specification.
 

diff --git a/llvm/docs/ReleaseNotes.rst b/llvm/docs/ReleaseNotes.rst
index e20a59bcbe121..ef7dacd82bf07 100644
--- a/llvm/docs/ReleaseNotes.rst
+++ b/llvm/docs/ReleaseNotes.rst
@@ -114,6 +114,7 @@ Changes to the RISC-V Backend
 * Adds support for the vendor-defined XTHeadBa (address-generation) extension.
 * Adds support for the vendor-defined XTHeadBb (basic bit-manipulation) extension.
 * Adds support for the vendor-defined XTHeadBs (single-bit) extension.
+* Adds support for the vendor-defined XTHeadCondMov (conditional move) extension.
 * Adds support for the vendor-defined XTHeadMac (multiply-accumulate instructions) extension.
 * Added support for the vendor-defined XTHeadMemPair (two-GPR memory operations)
   extension disassembler/assembler.

diff --git a/llvm/lib/Support/RISCVISAInfo.cpp b/llvm/lib/Support/RISCVISAInfo.cpp
index d36a783ded4dd..b17b230a3acc2 100644
--- a/llvm/lib/Support/RISCVISAInfo.cpp
+++ b/llvm/lib/Support/RISCVISAInfo.cpp
@@ -117,6 +117,7 @@ static const RISCVSupportedExtension SupportedExtensions[] = {
     {"xtheadbb", RISCVExtensionVersion{1, 0}},
     {"xtheadbs", RISCVExtensionVersion{1, 0}},
     {"xtheadcmo", RISCVExtensionVersion{1, 0}},
+    {"xtheadcondmov", RISCVExtensionVersion{1, 0}},
     {"xtheadfmemidx", RISCVExtensionVersion{1, 0}},
     {"xtheadmac", RISCVExtensionVersion{1, 0}},
     {"xtheadmemidx", RISCVExtensionVersion{1, 0}},

diff --git a/llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp b/llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp
index eccaa9d03f514..15352c1c0885d 100644
--- a/llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp
+++ b/llvm/lib/Target/RISCV/Disassembler/RISCVDisassembler.cpp
@@ -520,6 +520,13 @@ DecodeStatus RISCVDisassembler::getInstruction(MCInst &MI, uint64_t &Size,
       if (Result != MCDisassembler::Fail)
         return Result;
     }
+    if (STI.hasFeature(RISCV::FeatureVendorXTHeadCondMov)) {
+      LLVM_DEBUG(dbgs() << "Trying XTHeadCondMov custom opcode table:\n");
+      Result = decodeInstruction(DecoderTableTHeadCondMov32, MI, Insn, Address,
+                                 this, STI);
+      if (Result != MCDisassembler::Fail)
+        return Result;
+    }
     if (STI.hasFeature(RISCV::FeatureVendorXTHeadCmo)) {
       LLVM_DEBUG(dbgs() << "Trying XTHeadCmo custom opcode table:\n");
       Result = decodeInstruction(DecoderTableTHeadCmo32, MI, Insn, Address,

diff --git a/llvm/lib/Target/RISCV/RISCVFeatures.td b/llvm/lib/Target/RISCV/RISCVFeatures.td
index ad789170ce1ae..b439563fc2b91 100644
--- a/llvm/lib/Target/RISCV/RISCVFeatures.td
+++ b/llvm/lib/Target/RISCV/RISCVFeatures.td
@@ -505,6 +505,13 @@ def HasVendorXTHeadBs : Predicate<"Subtarget->hasVendorXTHeadBs()">,
                                   AssemblerPredicate<(all_of FeatureVendorXTHeadBs),
                                   "'xtheadbs' (T-Head single-bit instructions)">;
 
+def FeatureVendorXTHeadCondMov
+    : SubtargetFeature<"xtheadcondmov", "HasVendorXTHeadCondMov", "true",
+                       "'xtheadcondmov' (T-Head conditional move instructions)">;
+def HasVendorXTHeadCondMov : Predicate<"Subtarget->hasVendorXTHeadCondMov()">,
+                                       AssemblerPredicate<(all_of FeatureVendorXTHeadCondMov),
+                                       "'xtheadcondmov' (T-Head conditional move instructions)">;
+
 def FeatureVendorXTHeadCmo
     : SubtargetFeature<"xtheadcmo", "HasVendorXTHeadCmo", "true",
                        "'xtheadcmo' (T-Head cache management instructions)">;

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 3b95ff35b346c..6b93d53d7dbf3 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -323,7 +323,8 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM,
   if (Subtarget.is64Bit())
     setOperationAction(ISD::ABS, MVT::i32, Custom);
 
-  if (!Subtarget.hasVendorXVentanaCondOps())
+  if (!Subtarget.hasVendorXVentanaCondOps() &&
+      !Subtarget.hasVendorXTHeadCondMov())
     setOperationAction(ISD::SELECT, XLenVT, Custom);
 
   static const unsigned FPLegalNodeTypes[] = {

diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
index 000beac3f8e8e..7d060ad77186f 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
@@ -2102,6 +2102,15 @@ bool RISCVInstrInfo::findCommutedOpIndices(const MachineInstr &MI,
     return false;
 
   switch (MI.getOpcode()) {
+  case RISCV::TH_MVEQZ:
+  case RISCV::TH_MVNEZ:
+    // We can't commute operands if operand 2 (i.e., rs1 in
+    // mveqz/mvnez rd,rs1,rs2) is the zero-register (as it is
+    // not valid as the in/out-operand 1).
+    if (MI.getOperand(2).getReg() == RISCV::X0)
+      return false;
+    // Operands 1 and 2 are commutable, if we switch the opcode.
+    return fixCommutedOpIndices(SrcOpIdx1, SrcOpIdx2, 1, 2);
   case RISCV::TH_MULA:
   case RISCV::TH_MULAW:
   case RISCV::TH_MULAH:
@@ -2258,6 +2267,14 @@ MachineInstr *RISCVInstrInfo::commuteInstructionImpl(MachineInstr &MI,
   };
 
   switch (MI.getOpcode()) {
+  case RISCV::TH_MVEQZ:
+  case RISCV::TH_MVNEZ: {
+    auto &WorkingMI = cloneIfNew(MI);
+    WorkingMI.setDesc(get(MI.getOpcode() == RISCV::TH_MVEQZ ? RISCV::TH_MVNEZ
+                                                            : RISCV::TH_MVEQZ));
+    return TargetInstrInfo::commuteInstructionImpl(WorkingMI, false, OpIdx1,
+                                                   OpIdx2);
+  }
   case RISCV::PseudoCCMOVGPR: {
     // CCMOV can be commuted by inverting the condition.
     auto CC = static_cast<RISCVCC::CondCode>(MI.getOperand(3).getImm());

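The commutation hooks above rely on the identity that exchanging the
rd input and rs1 while flipping mveqz<->mvnez preserves the result;
this can be sanity-checked with a small semantic model (an illustrative
C++ sketch, not LLVM code — the function names are made up). The X0
check is needed because after commuting, the former rs1 becomes the
in/out rd operand, which the zero register cannot be:

```cpp
#include <cstdint>

// Semantic models: th.mveqz rd, rs1, rs2 computes rd := (rs2 == 0) ? rs1 : rd,
// and th.mvnez is the same with the condition inverted.
uint64_t mveqz(uint64_t rd, uint64_t rs1, uint64_t rs2) {
  return (rs2 == 0) ? rs1 : rd;
}
uint64_t mvnez(uint64_t rd, uint64_t rs1, uint64_t rs2) {
  return (rs2 != 0) ? rs1 : rd;
}

// For all rd, rs1, rs2:
//   mveqz(rd, rs1, rs2) == mvnez(rs1, rd, rs2)
// i.e., swapping the first two operands while switching the opcode
// yields the same value, which is what commuteInstructionImpl exploits.
bool commute_identity_holds(uint64_t rd, uint64_t rs1, uint64_t rs2) {
  return mveqz(rd, rs1, rs2) == mvnez(rs1, rd, rs2);
}
```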
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoXTHead.td b/llvm/lib/Target/RISCV/RISCVInstrInfoXTHead.td
index 4feff6323890b..448ec9cc30dfe 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoXTHead.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoXTHead.td
@@ -106,6 +106,14 @@ class THShiftW_ri<bits<7> funct7, bits<3> funct3, string opcodestr>
                     (ins GPR:$rs1, uimm5:$shamt),
                     opcodestr, "$rd, $rs1, $shamt">;
 
+let Predicates = [HasVendorXTHeadCondMov], DecoderNamespace = "THeadCondMov",
+    hasSideEffects = 0, mayLoad = 0, mayStore = 0, isCommutable = 1 in
+class THCondMov_rr<bits<7> funct7, string opcodestr>
+    : RVInstR<funct7, 0b001, OPC_CUSTOM_0, (outs GPR:$rd_wb),
+              (ins GPR:$rd, GPR:$rs1, GPR:$rs2),
+              opcodestr, "$rd, $rs1, $rs2"> {
+  let Constraints = "$rd_wb = $rd";
+}
 
 let Predicates = [HasVendorXTHeadMac], DecoderNamespace = "THeadMac",
     hasSideEffects = 0, mayLoad = 0, mayStore = 0, isCommutable = 1 in
@@ -250,6 +258,11 @@ def TH_TST : RVBShift_ri<0b10001, 0b001, OPC_CUSTOM_0, "th.tst">,
              Sched<[WriteSingleBitImm, ReadSingleBitImm]>;
 } // Predicates = [HasVendorXTHeadBs]
 
+let Predicates = [HasVendorXTHeadCondMov] in {
+def TH_MVEQZ  : THCondMov_rr<0b0100000, "th.mveqz">;
+def TH_MVNEZ  : THCondMov_rr<0b0100001, "th.mvnez">;
+} // Predicates = [HasVendorXTHeadCondMov]
+
 let Predicates = [HasVendorXTHeadMac] in {
 def TH_MULA  : THMulAccumulate_rr<0b0010000, "th.mula">;
 def TH_MULS  : THMulAccumulate_rr<0b0010001, "th.muls">;
@@ -625,6 +638,21 @@ def : Pat<(seteq (and GPR:$rs1, SingleBitSetMask:$mask), 0),
           (TH_TST (XORI GPR:$rs1, -1), SingleBitSetMask:$mask)>;
 } // Predicates = [HasVendorXTHeadBs]
 
+let Predicates = [HasVendorXTHeadCondMov] in {
+def : Pat<(select invcondop:$cond, GPR:$a, GPR:$b),
+          (TH_MVNEZ $a, $b, $cond)>;
+def : Pat<(select condop:$cond, GPR:$a, GPR:$b),
+          (TH_MVEQZ $a, $b, $cond)>;
+def : Pat<(select invcondop:$cond, GPR:$a, (XLenVT 0)),
+          (TH_MVNEZ $a, X0, $cond)>;
+def : Pat<(select condop:$cond, GPR:$a, (XLenVT 0)),
+          (TH_MVEQZ $a, X0, $cond)>;
+def : Pat<(select invcondop:$cond, (XLenVT 0), GPR:$b),
+          (TH_MVEQZ $b, X0, $cond)>;
+def : Pat<(select condop:$cond,  (XLenVT 0), GPR:$b),
+          (TH_MVNEZ $b, X0, $cond)>;
+} // Predicates = [HasVendorXTHeadCondMov]
+
 let Predicates = [HasVendorXTHeadMac] in {
 def : Pat<(add GPR:$rd, (mul GPR:$rs1, GPR:$rs2)), (TH_MULA GPR:$rd, GPR:$rs1, GPR:$rs2)>;
 def : Pat<(sub GPR:$rd, (mul GPR:$rs1, GPR:$rs2)), (TH_MULS GPR:$rd, GPR:$rs1, GPR:$rs2)>;

diff --git a/llvm/test/CodeGen/RISCV/attributes.ll b/llvm/test/CodeGen/RISCV/attributes.ll
index cbc22780a10c7..1117505e15e54 100644
--- a/llvm/test/CodeGen/RISCV/attributes.ll
+++ b/llvm/test/CodeGen/RISCV/attributes.ll
@@ -42,6 +42,7 @@
 ; RUN: llc -mtriple=riscv32 -mattr=+svpbmt %s -o - | FileCheck --check-prefixes=CHECK,RV32SVPBMT %s
 ; RUN: llc -mtriple=riscv32 -mattr=+svinval %s -o - | FileCheck --check-prefixes=CHECK,RV32SVINVAL %s
 ; RUN: llc -mtriple=riscv32 -mattr=+xtheadcmo %s -o - | FileCheck --check-prefix=RV32XTHEADCMO %s
+; RUN: llc -mtriple=riscv32 -mattr=+xtheadcondmov %s -o - | FileCheck --check-prefix=RV32XTHEADCONDMOV %s
 ; RUN: llc -mtriple=riscv32 -mattr=+xtheadfmemidx %s -o - | FileCheck --check-prefix=RV32XTHEADFMEMIDX %s
 ; RUN: llc -mtriple=riscv32 -mattr=+xtheadmac %s -o - | FileCheck --check-prefixes=CHECK,RV32XTHEADMAC %s
 ; RUN: llc -mtriple=riscv32 -mattr=+xtheadmemidx %s -o - | FileCheck --check-prefix=RV32XTHEADMEMIDX %s
@@ -101,6 +102,7 @@
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadbb %s -o - | FileCheck --check-prefixes=CHECK,RV64XTHEADBB %s
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadbs %s -o - | FileCheck --check-prefixes=CHECK,RV64XTHEADBS %s
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadcmo %s -o - | FileCheck --check-prefix=RV64XTHEADCMO %s
+; RUN: llc -mtriple=riscv64 -mattr=+xtheadcondmov %s -o - | FileCheck --check-prefix=RV64XTHEADCONDMOV %s
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadfmemidx %s -o - | FileCheck --check-prefix=RV64XTHEADFMEMIDX %s
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadmac %s -o - | FileCheck --check-prefixes=CHECK,RV64XTHEADMAC %s
 ; RUN: llc -mtriple=riscv64 -mattr=+xtheadmemidx %s -o - | FileCheck --check-prefix=RV64XTHEADMEMIDX %s
@@ -159,6 +161,7 @@
 ; RV32SVPBMT: .attribute 5, "rv32i2p0_svpbmt1p0"
 ; RV32SVINVAL: .attribute 5, "rv32i2p0_svinval1p0"
 ; RV32XTHEADCMO: .attribute 5, "rv32i2p0_xtheadcmo1p0"
+; RV32XTHEADCONDMOV: .attribute 5, "rv32i2p0_xtheadcondmov1p0"
 ; RV32XTHEADFMEMIDX: .attribute 5, "rv32i2p0_f2p0_xtheadfmemidx1p0"
 ; RV32XTHEADMAC: .attribute 5, "rv32i2p0_xtheadmac1p0"
 ; RV32XTHEADMEMIDX: .attribute 5, "rv32i2p0_xtheadmemidx1p0"
@@ -218,6 +221,7 @@
 ; RV64XTHEADBB: .attribute 5, "rv64i2p0_xtheadbb1p0"
 ; RV64XTHEADBS: .attribute 5, "rv64i2p0_xtheadbs1p0"
 ; RV64XTHEADCMO: .attribute 5, "rv64i2p0_xtheadcmo1p0"
+; RV64XTHEADCONDMOV: .attribute 5, "rv64i2p0_xtheadcondmov1p0"
 ; RV64XTHEADFMEMIDX: .attribute 5, "rv64i2p0_f2p0_xtheadfmemidx1p0"
 ; RV64XTHEADMAC: .attribute 5, "rv64i2p0_xtheadmac1p0"
 ; RV64XTHEADMEMIDX: .attribute 5, "rv64i2p0_xtheadmemidx1p0"

diff --git a/llvm/test/CodeGen/RISCV/condops.ll b/llvm/test/CodeGen/RISCV/condops.ll
new file mode 100644
index 0000000000000..2857786694d8c
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/condops.ll
@@ -0,0 +1,1100 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv64 -mattr=+xventanacondops < %s | FileCheck %s -check-prefix=RV64XVENTANACONDOPS
+; RUN: llc -mtriple=riscv64 -mattr=+xtheadcondmov < %s | FileCheck %s -check-prefix=RV64XTHEADCONDMOV
+
+define i64 @zero1(i64 %rs1, i1 zeroext %rc) {
+; RV64XVENTANACONDOPS-LABEL: zero1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a0, zero, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2(i64 %rs1, i1 zeroext %rc) {
+; RV64XVENTANACONDOPS-LABEL: zero2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a0, zero, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @add1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: add1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    add a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: add1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    add a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %add = add i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %add, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @add2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: add2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    add a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: add2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    add a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %add = add i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %add, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @add3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: add3:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    add a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: add3:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    add a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %add = add i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs1, i64 %add
+  ret i64 %sel
+}
+
+define i64 @add4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: add4:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    add a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: add4:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    add a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %add = add i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs2, i64 %add
+  ret i64 %sel
+}
+
+define i64 @sub1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: sub1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    sub a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: sub1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    sub a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %sub = sub i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %sub, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @sub2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: sub2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    sub a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: sub2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    sub a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %sub = sub i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs1, i64 %sub
+  ret i64 %sel
+}
+
+define i64 @or1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: or1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: or1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    or a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %or = or i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %or, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @or2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: or2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: or2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    or a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %or = or i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %or, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @or3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: or3:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: or3:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    or a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %or = or i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs1, i64 %or
+  ret i64 %sel
+}
+
+define i64 @or4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: or4:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: or4:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    or a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %or = or i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs2, i64 %or
+  ret i64 %sel
+}
+
+define i64 @xor1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: xor1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: xor1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %xor = xor i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %xor, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @xor2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: xor2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: xor2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %xor = xor i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %xor, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @xor3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: xor3:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: xor3:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %xor = xor i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs1, i64 %xor
+  ret i64 %sel
+}
+
+define i64 @xor4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: xor4:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: xor4:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a2, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %xor = xor i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs2, i64 %xor
+  ret i64 %sel
+}
+
+define i64 @and1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: and1:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    and a1, a1, a2
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: and1:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    and a2, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %and = and i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %and, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @and2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: and2:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    and a1, a2, a1
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: and2:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    and a1, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %and = and i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %and, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @and3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: and3:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    and a1, a1, a2
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: and3:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    and a2, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %and = and i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs1, i64 %and
+  ret i64 %sel
+}
+
+define i64 @and4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: and4:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    and a1, a2, a1
+; RV64XVENTANACONDOPS-NEXT:    or a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: and4:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    and a1, a1, a2
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %and = and i64 %rs1, %rs2
+  %sel = select i1 %rc, i64 %rs2, i64 %and
+  ret i64 %sel
+}
+
+define i64 @basic(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: basic:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a2, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a2
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: basic:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @seteq(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: seteq:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: seteq:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setne(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setne:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setne:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setgt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setgt:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    slt a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setgt:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    slt a0, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp sgt i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setge(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setge:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    slt a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setge:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    slt a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp sge i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setlt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setlt:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    slt a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setlt:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    slt a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp slt i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setle(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setle:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    slt a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setle:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    slt a0, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp sle i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setugt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setugt:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    sltu a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setugt:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    sltu a0, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ugt i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setuge(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setuge:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    sltu a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setuge:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    sltu a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp uge i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setult(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setult:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    sltu a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setult:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    sltu a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ult i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setule(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setule:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    sltu a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a3, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setule:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    sltu a0, a1, a0
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, a3, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ule i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @seteq_zero(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: seteq_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: seteq_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 0
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setne_zero(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setne_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a2, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a2
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setne_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, 0
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @seteq_constant(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: seteq_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, -123
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: seteq_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, -123
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 123
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setne_constant(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setne_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, -456
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a2, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a2
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setne_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, -456
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, 456
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @seteq_2048(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: seteq_2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: seteq_2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 2048
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @seteq_neg2048(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: seteq_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: seteq_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, -2048
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @setne_neg2048(i64 %a, i64 %rs1, i64 %rs2) {
+; RV64XVENTANACONDOPS-LABEL: setne_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a2, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    or a0, a0, a2
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: setne_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, a2, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, -2048
+  %sel = select i1 %rc, i64 %rs1, i64 %rs2
+  ret i64 %sel
+}
+
+define i64 @zero1_seteq(i64 %a, i64 %b, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_seteq:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_seteq:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_seteq(i64 %a, i64 %b, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_seteq:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_seteq:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, %b
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_setne(i64 %a, i64 %b, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_setne:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_setne:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, %b
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_setne(i64 %a, i64 %b, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_setne:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xor a0, a0, a1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_setne:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xor a0, a0, a1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a2, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a2
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, %b
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_seteq_zero(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_seteq_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_seteq_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 0
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_seteq_zero(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_seteq_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_seteq_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 0
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_setne_zero(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_setne_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_setne_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, 0
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_setne_zero(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_setne_zero:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_setne_zero:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, 0
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_seteq_constant(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_seteq_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, 231
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_seteq_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, 231
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, -231
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_seteq_constant(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_seteq_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, -546
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_seteq_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, -546
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, 546
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_setne_constant(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_setne_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, -321
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_setne_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, -321
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, 321
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_setne_constant(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_setne_constant:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    addi a0, a0, 654
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_setne_constant:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    addi a0, a0, 654
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, -654
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_seteq_neg2048(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_seteq_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_seteq_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, -2048
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_seteq_neg2048(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_seteq_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_seteq_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp eq i64 %a, -2048
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+define i64 @zero1_setne_neg2048(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero1_setne_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero1_setne_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, -2048
+  %sel = select i1 %rc, i64 %rs1, i64 0
+  ret i64 %sel
+}
+
+define i64 @zero2_setne_neg2048(i64 %a, i64 %rs1) {
+; RV64XVENTANACONDOPS-LABEL: zero2_setne_neg2048:
+; RV64XVENTANACONDOPS:       # %bb.0:
+; RV64XVENTANACONDOPS-NEXT:    xori a0, a0, -2048
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn a0, a1, a0
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: zero2_setne_neg2048:
+; RV64XTHEADCONDMOV:       # %bb.0:
+; RV64XTHEADCONDMOV-NEXT:    xori a0, a0, -2048
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez a1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:    mv a0, a1
+; RV64XTHEADCONDMOV-NEXT:    ret
+  %rc = icmp ne i64 %a, -2048
+  %sel = select i1 %rc, i64 0, i64 %rs1
+  ret i64 %sel
+}
+
+; Test that we are able to convert the sext.w in the loop to mv.
+define void @sextw_removal_maskc(i1 %c, i32 signext %arg, i32 signext %arg1) nounwind {
+; RV64XVENTANACONDOPS-LABEL: sextw_removal_maskc:
+; RV64XVENTANACONDOPS:       # %bb.0: # %bb
+; RV64XVENTANACONDOPS-NEXT:    addi sp, sp, -32
+; RV64XVENTANACONDOPS-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    mv s0, a2
+; RV64XVENTANACONDOPS-NEXT:    andi a0, a0, 1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskc s1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:  .LBB54_1: # %bb2
+; RV64XVENTANACONDOPS-NEXT:    # =>This Inner Loop Header: Depth=1
+; RV64XVENTANACONDOPS-NEXT:    mv a0, s1
+; RV64XVENTANACONDOPS-NEXT:    call bar@plt
+; RV64XVENTANACONDOPS-NEXT:    sllw s1, s1, s0
+; RV64XVENTANACONDOPS-NEXT:    bnez a0, .LBB54_1
+; RV64XVENTANACONDOPS-NEXT:  # %bb.2: # %bb7
+; RV64XVENTANACONDOPS-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    addi sp, sp, 32
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: sextw_removal_maskc:
+; RV64XTHEADCONDMOV:       # %bb.0: # %bb
+; RV64XTHEADCONDMOV-NEXT:    addi sp, sp, -32
+; RV64XTHEADCONDMOV-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    mv s0, a2
+; RV64XTHEADCONDMOV-NEXT:    mv s1, a1
+; RV64XTHEADCONDMOV-NEXT:    andi a0, a0, 1
+; RV64XTHEADCONDMOV-NEXT:    th.mveqz s1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:  .LBB54_1: # %bb2
+; RV64XTHEADCONDMOV-NEXT:    # =>This Inner Loop Header: Depth=1
+; RV64XTHEADCONDMOV-NEXT:    sext.w a0, s1
+; RV64XTHEADCONDMOV-NEXT:    call bar@plt
+; RV64XTHEADCONDMOV-NEXT:    sllw s1, s1, s0
+; RV64XTHEADCONDMOV-NEXT:    bnez a0, .LBB54_1
+; RV64XTHEADCONDMOV-NEXT:  # %bb.2: # %bb7
+; RV64XTHEADCONDMOV-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    addi sp, sp, 32
+; RV64XTHEADCONDMOV-NEXT:    ret
+bb:
+  %i = select i1 %c, i32 %arg, i32 0
+  br label %bb2
+
+bb2:                                              ; preds = %bb2, %bb
+  %i3 = phi i32 [ %i, %bb ], [ %i5, %bb2 ]
+  %i4 = tail call signext i32 @bar(i32 signext %i3)
+  %i5 = shl i32 %i3, %arg1
+  %i6 = icmp eq i32 %i4, 0
+  br i1 %i6, label %bb7, label %bb2
+
+bb7:                                              ; preds = %bb2
+  ret void
+}
+declare signext i32 @bar(i32 signext)
+
+define void @sextw_removal_maskcn(i1 %c, i32 signext %arg, i32 signext %arg1) nounwind {
+; RV64XVENTANACONDOPS-LABEL: sextw_removal_maskcn:
+; RV64XVENTANACONDOPS:       # %bb.0: # %bb
+; RV64XVENTANACONDOPS-NEXT:    addi sp, sp, -32
+; RV64XVENTANACONDOPS-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
+; RV64XVENTANACONDOPS-NEXT:    mv s0, a2
+; RV64XVENTANACONDOPS-NEXT:    andi a0, a0, 1
+; RV64XVENTANACONDOPS-NEXT:    vt.maskcn s1, a1, a0
+; RV64XVENTANACONDOPS-NEXT:  .LBB55_1: # %bb2
+; RV64XVENTANACONDOPS-NEXT:    # =>This Inner Loop Header: Depth=1
+; RV64XVENTANACONDOPS-NEXT:    mv a0, s1
+; RV64XVENTANACONDOPS-NEXT:    call bar@plt
+; RV64XVENTANACONDOPS-NEXT:    sllw s1, s1, s0
+; RV64XVENTANACONDOPS-NEXT:    bnez a0, .LBB55_1
+; RV64XVENTANACONDOPS-NEXT:  # %bb.2: # %bb7
+; RV64XVENTANACONDOPS-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
+; RV64XVENTANACONDOPS-NEXT:    addi sp, sp, 32
+; RV64XVENTANACONDOPS-NEXT:    ret
+;
+; RV64XTHEADCONDMOV-LABEL: sextw_removal_maskcn:
+; RV64XTHEADCONDMOV:       # %bb.0: # %bb
+; RV64XTHEADCONDMOV-NEXT:    addi sp, sp, -32
+; RV64XTHEADCONDMOV-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
+; RV64XTHEADCONDMOV-NEXT:    mv s0, a2
+; RV64XTHEADCONDMOV-NEXT:    mv s1, a1
+; RV64XTHEADCONDMOV-NEXT:    andi a0, a0, 1
+; RV64XTHEADCONDMOV-NEXT:    th.mvnez s1, zero, a0
+; RV64XTHEADCONDMOV-NEXT:  .LBB55_1: # %bb2
+; RV64XTHEADCONDMOV-NEXT:    # =>This Inner Loop Header: Depth=1
+; RV64XTHEADCONDMOV-NEXT:    sext.w a0, s1
+; RV64XTHEADCONDMOV-NEXT:    call bar@plt
+; RV64XTHEADCONDMOV-NEXT:    sllw s1, s1, s0
+; RV64XTHEADCONDMOV-NEXT:    bnez a0, .LBB55_1
+; RV64XTHEADCONDMOV-NEXT:  # %bb.2: # %bb7
+; RV64XTHEADCONDMOV-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
+; RV64XTHEADCONDMOV-NEXT:    addi sp, sp, 32
+; RV64XTHEADCONDMOV-NEXT:    ret
+bb:
+  %i = select i1 %c, i32 0, i32 %arg
+  br label %bb2
+
+bb2:                                              ; preds = %bb2, %bb
+  %i3 = phi i32 [ %i, %bb ], [ %i5, %bb2 ]
+  %i4 = tail call signext i32 @bar(i32 signext %i3)
+  %i5 = shl i32 %i3, %arg1
+  %i6 = icmp eq i32 %i4, 0
+  br i1 %i6, label %bb7, label %bb2
+
+bb7:                                              ; preds = %bb2
+  ret void
+}

diff  --git a/llvm/test/CodeGen/RISCV/xventanacondops.ll b/llvm/test/CodeGen/RISCV/xventanacondops.ll
deleted file mode 100644
index c00a7d2e9b7ca..0000000000000
--- a/llvm/test/CodeGen/RISCV/xventanacondops.ll
+++ /dev/null
@@ -1,687 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc -mtriple=riscv64 -mattr=+xventanacondops < %s | FileCheck %s
-
-define i64 @zero1(i64 %rs1, i1 zeroext %rc) {
-; CHECK-LABEL: zero1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a0, a1
-; CHECK-NEXT:    ret
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2(i64 %rs1, i1 zeroext %rc) {
-; CHECK-LABEL: zero2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a0, a1
-; CHECK-NEXT:    ret
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @add1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: add1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    add a0, a1, a0
-; CHECK-NEXT:    ret
-  %add = add i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %add, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @add2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: add2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    add a0, a2, a0
-; CHECK-NEXT:    ret
-  %add = add i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %add, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @add3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: add3:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    add a0, a1, a0
-; CHECK-NEXT:    ret
-  %add = add i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs1, i64 %add
-  ret i64 %sel
-}
-
-define i64 @add4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: add4:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    add a0, a2, a0
-; CHECK-NEXT:    ret
-  %add = add i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs2, i64 %add
-  ret i64 %sel
-}
-
-define i64 @sub1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: sub1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    sub a0, a1, a0
-; CHECK-NEXT:    ret
-  %sub = sub i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %sub, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @sub2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: sub2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    sub a0, a1, a0
-; CHECK-NEXT:    ret
-  %sub = sub i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs1, i64 %sub
-  ret i64 %sel
-}
-
-define i64 @or1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: or1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %or = or i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %or, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @or2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: or2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    or a0, a2, a0
-; CHECK-NEXT:    ret
-  %or = or i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %or, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @or3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: or3:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %or = or i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs1, i64 %or
-  ret i64 %sel
-}
-
-define i64 @or4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: or4:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    or a0, a2, a0
-; CHECK-NEXT:    ret
-  %or = or i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs2, i64 %or
-  ret i64 %sel
-}
-
-define i64 @xor1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: xor1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    xor a0, a1, a0
-; CHECK-NEXT:    ret
-  %xor = xor i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %xor, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @xor2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: xor2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    xor a0, a2, a0
-; CHECK-NEXT:    ret
-  %xor = xor i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %xor, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @xor3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: xor3:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    xor a0, a1, a0
-; CHECK-NEXT:    ret
-  %xor = xor i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs1, i64 %xor
-  ret i64 %sel
-}
-
-define i64 @xor4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: xor4:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    xor a0, a2, a0
-; CHECK-NEXT:    ret
-  %xor = xor i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs2, i64 %xor
-  ret i64 %sel
-}
-
-define i64 @and1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: and1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    and a1, a1, a2
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %and = and i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %and, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @and2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: and2:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    and a1, a2, a1
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %and = and i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %and, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @and3(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: and3:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    and a1, a1, a2
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %and = and i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs1, i64 %and
-  ret i64 %sel
-}
-
-define i64 @and4(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: and4:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    and a1, a2, a1
-; CHECK-NEXT:    or a0, a1, a0
-; CHECK-NEXT:    ret
-  %and = and i64 %rs1, %rs2
-  %sel = select i1 %rc, i64 %rs2, i64 %and
-  ret i64 %sel
-}
-
-define i64 @basic(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: basic:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a2, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    or a0, a0, a2
-; CHECK-NEXT:    ret
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @seteq(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: seteq:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a3, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setne(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setne:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a3, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setgt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setgt:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    slt a0, a1, a0
-; CHECK-NEXT:    vt.maskcn a1, a3, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp sgt i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setge(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setge:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    slt a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a3, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp sge i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setlt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setlt:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    slt a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a3, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp slt i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setle(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setle:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    slt a0, a1, a0
-; CHECK-NEXT:    vt.maskcn a1, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a3, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp sle i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setugt(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setugt:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    sltu a0, a1, a0
-; CHECK-NEXT:    vt.maskcn a1, a3, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp ugt i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setuge(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setuge:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    sltu a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a3, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp uge i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setult(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setult:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    sltu a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a1, a3, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp ult i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setule(i64 %a, i64 %b, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setule:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    sltu a0, a1, a0
-; CHECK-NEXT:    vt.maskcn a1, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a3, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp ule i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @seteq_zero(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: seteq_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a1, a1, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, 0
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setne_zero(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setne_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a2, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    or a0, a0, a2
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, 0
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @seteq_constant(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: seteq_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, -123
-; CHECK-NEXT:    vt.maskcn a1, a1, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, 123
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setne_constant(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setne_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, -456
-; CHECK-NEXT:    vt.maskcn a2, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    or a0, a0, a2
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, 456
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @seteq_neg2048(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: seteq_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskcn a1, a1, a0
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    or a0, a0, a1
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, -2048
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @setne_neg2048(i64 %a, i64 %rs1, i64 %rs2) {
-; CHECK-LABEL: setne_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskcn a2, a2, a0
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    or a0, a0, a2
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, -2048
-  %sel = select i1 %rc, i64 %rs1, i64 %rs2
-  ret i64 %sel
-}
-
-define i64 @zero1_seteq(i64 %a, i64 %b, i64 %rs1) {
-; CHECK-LABEL: zero1_seteq:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_seteq(i64 %a, i64 %b, i64 %rs1) {
-; CHECK-LABEL: zero2_seteq:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, %b
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_setne(i64 %a, i64 %b, i64 %rs1) {
-; CHECK-LABEL: zero1_setne:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskc a0, a2, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, %b
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_setne(i64 %a, i64 %b, i64 %rs1) {
-; CHECK-LABEL: zero2_setne:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xor a0, a0, a1
-; CHECK-NEXT:    vt.maskcn a0, a2, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, %b
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_seteq_zero(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_seteq_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, 0
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_seteq_zero(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_seteq_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, 0
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_setne_zero(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_setne_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, 0
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_setne_zero(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_setne_zero:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, 0
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_seteq_constant(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_seteq_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, 231
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, -231
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_seteq_constant(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_seteq_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, -546
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, 546
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_setne_constant(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_setne_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, -321
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, 321
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_setne_constant(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_setne_constant:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    addi a0, a0, 654
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, -654
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_seteq_neg2048(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_seteq_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, -2048
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_seteq_neg2048(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_seteq_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp eq i64 %a, -2048
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-define i64 @zero1_setne_neg2048(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero1_setne_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskc a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, -2048
-  %sel = select i1 %rc, i64 %rs1, i64 0
-  ret i64 %sel
-}
-
-define i64 @zero2_setne_neg2048(i64 %a, i64 %rs1) {
-; CHECK-LABEL: zero2_setne_neg2048:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    xori a0, a0, -2048
-; CHECK-NEXT:    vt.maskcn a0, a1, a0
-; CHECK-NEXT:    ret
-  %rc = icmp ne i64 %a, -2048
-  %sel = select i1 %rc, i64 0, i64 %rs1
-  ret i64 %sel
-}
-
-; Test that we are able to convert the sext.w in the loop to mv.
-define void @sextw_removal_maskc(i1 %c, i32 signext %arg, i32 signext %arg1) nounwind {
-; CHECK-LABEL: sextw_removal_maskc:
-; CHECK:       # %bb.0: # %bb
-; CHECK-NEXT:    addi sp, sp, -32
-; CHECK-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    mv s0, a2
-; CHECK-NEXT:    andi a0, a0, 1
-; CHECK-NEXT:    vt.maskc s1, a1, a0
-; CHECK-NEXT:  .LBB53_1: # %bb2
-; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    mv a0, s1
-; CHECK-NEXT:    call bar at plt
-; CHECK-NEXT:    sllw s1, s1, s0
-; CHECK-NEXT:    bnez a0, .LBB53_1
-; CHECK-NEXT:  # %bb.2: # %bb7
-; CHECK-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    addi sp, sp, 32
-; CHECK-NEXT:    ret
-bb:
-  %i = select i1 %c, i32 %arg, i32 0
-  br label %bb2
-
-bb2:                                              ; preds = %bb2, %bb
-  %i3 = phi i32 [ %i, %bb ], [ %i5, %bb2 ]
-  %i4 = tail call signext i32 @bar(i32 signext %i3)
-  %i5 = shl i32 %i3, %arg1
-  %i6 = icmp eq i32 %i4, 0
-  br i1 %i6, label %bb7, label %bb2
-
-bb7:                                              ; preds = %bb2
-  ret void
-}
-declare signext i32 @bar(i32 signext)
-
-define void @sextw_removal_maskcn(i1 %c, i32 signext %arg, i32 signext %arg1) nounwind {
-; CHECK-LABEL: sextw_removal_maskcn:
-; CHECK:       # %bb.0: # %bb
-; CHECK-NEXT:    addi sp, sp, -32
-; CHECK-NEXT:    sd ra, 24(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    sd s0, 16(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    sd s1, 8(sp) # 8-byte Folded Spill
-; CHECK-NEXT:    mv s0, a2
-; CHECK-NEXT:    andi a0, a0, 1
-; CHECK-NEXT:    vt.maskcn s1, a1, a0
-; CHECK-NEXT:  .LBB54_1: # %bb2
-; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    mv a0, s1
-; CHECK-NEXT:    call bar at plt
-; CHECK-NEXT:    sllw s1, s1, s0
-; CHECK-NEXT:    bnez a0, .LBB54_1
-; CHECK-NEXT:  # %bb.2: # %bb7
-; CHECK-NEXT:    ld ra, 24(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    ld s0, 16(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    ld s1, 8(sp) # 8-byte Folded Reload
-; CHECK-NEXT:    addi sp, sp, 32
-; CHECK-NEXT:    ret
-bb:
-  %i = select i1 %c, i32 0, i32 %arg
-  br label %bb2
-
-bb2:                                              ; preds = %bb2, %bb
-  %i3 = phi i32 [ %i, %bb ], [ %i5, %bb2 ]
-  %i4 = tail call signext i32 @bar(i32 signext %i3)
-  %i5 = shl i32 %i3, %arg1
-  %i6 = icmp eq i32 %i4, 0
-  br i1 %i6, label %bb7, label %bb2
-
-bb7:                                              ; preds = %bb2
-  ret void
-}

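The removed xventanacondops.ll tests above exercise the XVentanaCondOps lowering of `select`. As a rough sketch (not part of the commit, and using illustrative Python rather than the actual SelectionDAG code), the `vt.maskc`/`vt.maskcn` semantics implied by these CHECK lines, and the three-instruction `select` expansion they test, can be modeled as:

```python
MASK64 = (1 << 64) - 1  # model 64-bit registers


def maskc(rs1: int, rs2: int) -> int:
    # vt.maskc rd, rs1, rs2: rd = rs1 if rs2 is nonzero, else 0.
    return rs1 & MASK64 if rs2 != 0 else 0


def maskcn(rs1: int, rs2: int) -> int:
    # vt.maskcn rd, rs1, rs2: rd = rs1 if rs2 is zero, else 0.
    return rs1 & MASK64 if rs2 == 0 else 0


def select_via_maskc(cond: int, rs1: int, rs2: int) -> int:
    # The general-case lowering in the removed tests, given a register
    # `cond` that is nonzero exactly when the i1 condition is true:
    #   vt.maskcn tmp, rs2, cond
    #   vt.maskc  rd,  rs1, cond
    #   or        rd,  rd,  tmp
    return maskc(rs1, cond) | maskcn(rs2, cond)
```

The zero1/zero2 variants in the tests show the expansion collapsing to a single `vt.maskc` or `vt.maskcn` when one `select` arm is the constant 0.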
diff  --git a/llvm/test/MC/RISCV/xtheadcondmov-invalid.s b/llvm/test/MC/RISCV/xtheadcondmov-invalid.s
new file mode 100644
index 0000000000000..7a7a3b6f8e2a6
--- /dev/null
+++ b/llvm/test/MC/RISCV/xtheadcondmov-invalid.s
@@ -0,0 +1,9 @@
+# RUN: not llvm-mc -triple riscv32 -mattr=+xtheadcondmov < %s 2>&1 | FileCheck %s
+# RUN: not llvm-mc -triple riscv64 -mattr=+xtheadcondmov < %s 2>&1 | FileCheck %s
+
+th.mveqz a0,a1        # CHECK: :[[@LINE]]:1: error: too few operands for instruction
+th.mveqz a0,a1,a2,a3  # CHECK: :[[@LINE]]:19: error: invalid operand for instruction
+th.mveqz a0,a1,1      # CHECK: :[[@LINE]]:16: error: invalid operand for instruction
+th.mvnez a0,a1        # CHECK: :[[@LINE]]:1: error: too few operands for instruction
+th.mvnez a0,a1,a2,a3  # CHECK: :[[@LINE]]:19: error: invalid operand for instruction
+th.mvnez a0,a1,1      # CHECK: :[[@LINE]]:16: error: invalid operand for instruction

diff  --git a/llvm/test/MC/RISCV/xtheadcondmov-valid.s b/llvm/test/MC/RISCV/xtheadcondmov-valid.s
new file mode 100644
index 0000000000000..d7dd5aa357362
--- /dev/null
+++ b/llvm/test/MC/RISCV/xtheadcondmov-valid.s
@@ -0,0 +1,18 @@
+# RUN: llvm-mc %s -triple=riscv32 -mattr=+xtheadcondmov -show-encoding \
+# RUN:     | FileCheck -check-prefixes=CHECK-ASM,CHECK-ASM-AND-OBJ %s
+# RUN: llvm-mc -filetype=obj -triple=riscv32 -mattr=+xtheadcondmov < %s \
+# RUN:     | llvm-objdump --mattr=+xtheadcondmov -d -r - \
+# RUN:     | FileCheck --check-prefix=CHECK-ASM-AND-OBJ %s
+# RUN: llvm-mc %s -triple=riscv64 -mattr=+xtheadcondmov -show-encoding \
+# RUN:     | FileCheck -check-prefixes=CHECK-ASM,CHECK-ASM-AND-OBJ %s
+# RUN: llvm-mc -filetype=obj -triple=riscv64 -mattr=+xtheadcondmov < %s \
+# RUN:     | llvm-objdump --mattr=+xtheadcondmov -d -r - \
+# RUN:     | FileCheck --check-prefix=CHECK-ASM-AND-OBJ %s
+
+# CHECK-ASM-AND-OBJ: th.mveqz a0, a1, a2
+# CHECK-ASM: encoding: [0x0b,0x95,0xc5,0x40]
+th.mveqz a0,a1,a2
+
+# CHECK-ASM-AND-OBJ: th.mvnez a0, a1, a2
+# CHECK-ASM: encoding: [0x0b,0x95,0xc5,0x42]
+th.mvnez a0,a1,a2

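As a sketch (not part of the commit), the XTheadCondMov semantics described in the commit message, and the R-type field values implied by the encoding bytes checked above, can be modeled as follows. The field breakdown (funct3=1, funct7 values) is inferred here from the test's expected bytes rather than stated in the diff:

```python
def th_mveqz(rd: int, rs1: int, rs2: int) -> int:
    # th.mveqz rd, rs1, rs2: move rs1 into rd when rs2 == 0;
    # otherwise rd keeps its old value (rd is both input and output).
    return rs1 if rs2 == 0 else rd


def th_mvnez(rd: int, rs1: int, rs2: int) -> int:
    # th.mvnez rd, rs1, rs2: move rs1 into rd when rs2 != 0.
    return rs1 if rs2 != 0 else rd


def encode_r(funct7: int, rs2: int, rs1: int, funct3: int,
             rd: int, opcode: int) -> int:
    # Standard RISC-V R-type layout: funct7 | rs2 | rs1 | funct3 | rd | opcode.
    return ((funct7 << 25) | (rs2 << 20) | (rs1 << 15)
            | (funct3 << 12) | (rd << 7) | opcode)


# th.mveqz a0, a1, a2 -> bytes [0x0b,0x95,0xc5,0x40], i.e. the
# little-endian word 0x40c5950b: custom-0 opcode (0x0b), funct3=1,
# funct7=0x20, rd=x10 (a0), rs1=x11 (a1), rs2=x12 (a2).
assert encode_r(0x20, 12, 11, 1, 10, 0x0b) == 0x40c5950b
# th.mvnez differs only in funct7=0x21 -> word 0x42c5950b.
assert encode_r(0x21, 12, 11, 1, 10, 0x0b) == 0x42c5950b
```

Because `rd` is a source as well as a destination, these instructions map naturally to a `select` where one arm is the prior value of `rd`, which is what distinguishes them from the Zicond/XVentanaCondOps mask-style conditionals.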
