[llvm] 9cb8f4d - [ARM] Add a tail-predication loop predicate register

David Green via llvm-commits llvm-commits at lists.llvm.org
Thu Sep 2 05:43:14 PDT 2021


Author: David Green
Date: 2021-09-02T13:42:58+01:00
New Revision: 9cb8f4d1ad651e2be41985ccb3d6efe0953a8101

URL: https://github.com/llvm/llvm-project/commit/9cb8f4d1ad651e2be41985ccb3d6efe0953a8101
DIFF: https://github.com/llvm/llvm-project/commit/9cb8f4d1ad651e2be41985ccb3d6efe0953a8101.diff

LOG: [ARM] Add a tail-predication loop predicate register

The semantics of tail-predicated loops mean that the value of LR at the
point an instruction is executed determines the predicate. In other words:

mov r3, #3
DLSTP lr, r3        // Start tail predication, lr==3
VADD.s32 q0, q1, q2 // Lanes 0, 1 and 2 are updated in q0.
mov lr, #1
VADD.s32 q0, q1, q2 // Only the first lane is updated.

This means that the value of lr cannot be spilled and re-used in
tail-predication regions without potentially altering the behaviour of
the program. For example, more lanes than required could be stored, and
in the case of a gather those lanes might not have been set up, leading
to alignment exceptions.
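
As an illustrative sketch (not from this patch), suppose the register
allocator spilled lr and reused the register inside the tail-predicated
region:

mov r3, #3
DLSTP lr, r3        // Start tail predication, lr==3
STR lr, [sp]        // Spill the element count...
mov lr, #8          // ...and reuse lr for something else
VSTRW.32 q0, [r0]   // Meant to store lanes 0, 1 and 2, but with lr==8
                    // all four 32-bit lanes are written
LDR lr, [sp]        // Reloading lr cannot undo the extra store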

This patch adds a new lr predicate operand to MVE instructions in order
to keep a reference to the lr that they use as a tail predicate. It will
usually hold the zero register, meaning not tail-predicated, and is set
to the LR phi value in the MVETPAndVPTOptimisationsPass. This prevents
lr from being spilled anywhere that it needs to be used.
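
Schematically, in MIR (only a sketch; exact operand lists vary by
instruction), the existing cond/cond_reg predicate pair, as in:

$q0 = MVE_VADDi32 $q1, $q2, 0, $noreg, undef $q0

gains a trailing tp_reg operand:

$q0 = MVE_VADDi32 $q1, $q2, 0, $noreg, $noreg, undef $q0

where the final $noreg is replaced by the LR phi value inside
tail-predicated loops.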

A lot of tests needed updating.

Differential Revision: https://reviews.llvm.org/D107638

Added: 
    

Modified: 
    llvm/lib/CodeGen/MachineVerifier.cpp
    llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
    llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
    llvm/lib/Target/ARM/ARMISelLowering.cpp
    llvm/lib/Target/ARM/ARMInstrCDE.td
    llvm/lib/Target/ARM/ARMInstrFormats.td
    llvm/lib/Target/ARM/ARMInstrMVE.td
    llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
    llvm/lib/Target/ARM/ARMLowOverheadLoops.cpp
    llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
    llvm/lib/Target/ARM/Disassembler/ARMDisassembler.cpp
    llvm/lib/Target/ARM/MVETPAndVPTOptimisationsPass.cpp
    llvm/test/CodeGen/ARM/machine-outliner-unoutlinable.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/add_reduce.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/begin-vpt-without-inst.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/cmplx_cong.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/count_dominates_start.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/ctlz-non-zeros.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/disjoint-vcmp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-ignore-vctp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-remove-loop-update.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/emptyblock.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/extract-element.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-16.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-32.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-8.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-1.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-2.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-3.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-1.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-2.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/invariant-qreg.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain-store.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-itercount.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-mov.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-random.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp-reordered.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-vcmp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/livereg-no-loop-def.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/lstp-insertion-position.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/matrix.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-after-dlstp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-lr-terminator.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-def-before-start.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-start-after-def.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-block-cond-iter-count.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-cond-iter-count.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/multiple-do-loops.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/no-vpsel-liveout.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-load.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-store.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-invariant.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-liveout.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/reductions-vpt-liveout.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/remove-elem-moves.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/safe-retaining.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-debug.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-vpt-debug.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/subreg-liveness.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/unpredicated-max.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/unrolled-and-vector.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/unsafe-retaining.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vaddv.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vcmp-vpst-combination-across-blocks.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-add-operand-liveout.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt-2.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subi3.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri12.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp16-reduce.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vector_spill_in_loop.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmaxmin_vpred_r.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmldava_in_vpt.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-blocks.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/wls-search-pred.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/wlstp.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-liveout-lsr-shift.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-opcode-liveout.mir
    llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-operand-liveout.mir
    llvm/test/CodeGen/Thumb2/mve-gatherscatter-mmo.ll
    llvm/test/CodeGen/Thumb2/mve-postinc-distribute.mir
    llvm/test/CodeGen/Thumb2/mve-stacksplot.mir
    llvm/test/CodeGen/Thumb2/mve-tp-loop.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-debug.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-kill.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-nots.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-optimisations.mir
    llvm/test/CodeGen/Thumb2/mve-vpt-preuse.mir
    llvm/test/CodeGen/Thumb2/mve-wls-block-placement.mir
    llvm/test/CodeGen/Thumb2/phi_prevent_copy.mir

Removed: 
    


################################################################################
diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index dee5580580ee3..a553b677237b4 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -1653,6 +1653,8 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
       report("Unspillable Terminator does not define a reg", MI);
     Register Def = MI->getOperand(0).getReg();
     if (Def.isVirtual() &&
+        !MF->getProperties().hasProperty(
+            MachineFunctionProperties::Property::NoPHIs) &&
         std::distance(MRI->use_nodbg_begin(Def), MRI->use_nodbg_end()) > 1)
       report("Unspillable Terminator expected to have at most one use!", MI);
   }

diff --git a/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp b/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
index e09fe7fa0597c..92f7cde3c4c0a 100644
--- a/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -867,6 +867,7 @@ void ARMBaseInstrInfo::copyToCPSR(MachineBasicBlock &MBB,
 void llvm::addUnpredicatedMveVpredNOp(MachineInstrBuilder &MIB) {
   MIB.addImm(ARMVCC::None);
   MIB.addReg(0);
+  MIB.addReg(0); // tp_reg
 }
 
 void llvm::addUnpredicatedMveVpredROp(MachineInstrBuilder &MIB,
@@ -878,6 +879,7 @@ void llvm::addUnpredicatedMveVpredROp(MachineInstrBuilder &MIB,
 void llvm::addPredicatedMveVpredNOp(MachineInstrBuilder &MIB, unsigned Cond) {
   MIB.addImm(Cond);
   MIB.addReg(ARM::VPR, RegState::Implicit);
+  MIB.addReg(0); // tp_reg
 }
 
 void llvm::addPredicatedMveVpredROp(MachineInstrBuilder &MIB,

diff --git a/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp b/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
index 372f79660d03e..a7469a9f53d3a 100644
--- a/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
+++ b/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
@@ -1822,8 +1822,11 @@ bool ARMDAGToDAGISel::tryMVEIndexedLoad(SDNode *N) {
   else
     return false;
 
-  SDValue Ops[] = {Base, NewOffset,
-                   CurDAG->getTargetConstant(Pred, SDLoc(N), MVT::i32), PredReg,
+  SDValue Ops[] = {Base,
+                   NewOffset,
+                   CurDAG->getTargetConstant(Pred, SDLoc(N), MVT::i32),
+                   PredReg,
+                   CurDAG->getRegister(0, MVT::i32), // tp_reg
                    Chain};
   SDNode *New = CurDAG->getMachineNode(Opcode, SDLoc(N), MVT::i32,
                                        N->getValueType(0), MVT::Other, Ops);
@@ -2529,6 +2532,7 @@ void ARMDAGToDAGISel::AddMVEPredicateToOps(SDValueVector &Ops, SDLoc Loc,
                                            SDValue PredicateMask) {
   Ops.push_back(CurDAG->getTargetConstant(ARMVCC::Then, Loc, MVT::i32));
   Ops.push_back(PredicateMask);
+  Ops.push_back(CurDAG->getRegister(0, MVT::i32)); // tp_reg
 }
 
 template <typename SDValueVector>
@@ -2537,6 +2541,7 @@ void ARMDAGToDAGISel::AddMVEPredicateToOps(SDValueVector &Ops, SDLoc Loc,
                                            SDValue Inactive) {
   Ops.push_back(CurDAG->getTargetConstant(ARMVCC::Then, Loc, MVT::i32));
   Ops.push_back(PredicateMask);
+  Ops.push_back(CurDAG->getRegister(0, MVT::i32)); // tp_reg
   Ops.push_back(Inactive);
 }
 
@@ -2544,6 +2549,7 @@ template <typename SDValueVector>
 void ARMDAGToDAGISel::AddEmptyMVEPredicateToOps(SDValueVector &Ops, SDLoc Loc) {
   Ops.push_back(CurDAG->getTargetConstant(ARMVCC::None, Loc, MVT::i32));
   Ops.push_back(CurDAG->getRegister(0, MVT::i32));
+  Ops.push_back(CurDAG->getRegister(0, MVT::i32)); // tp_reg
 }
 
 template <typename SDValueVector>
@@ -2551,6 +2557,7 @@ void ARMDAGToDAGISel::AddEmptyMVEPredicateToOps(SDValueVector &Ops, SDLoc Loc,
                                                 EVT InactiveTy) {
   Ops.push_back(CurDAG->getTargetConstant(ARMVCC::None, Loc, MVT::i32));
   Ops.push_back(CurDAG->getRegister(0, MVT::i32));
+  Ops.push_back(CurDAG->getRegister(0, MVT::i32)); // tp_reg
   Ops.push_back(SDValue(
       CurDAG->getMachineNode(TargetOpcode::IMPLICIT_DEF, Loc, InactiveTy), 0));
 }

diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 7eaa3fc4073fc..efdae2c7117df 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -11542,6 +11542,7 @@ static void genTPLoopBody(MachineBasicBlock *TpLoopBody,
   BuildMI(TpLoopBody, Dl, TII->get(ARM::MVE_VCTP8), VccrReg)
       .addUse(PredCounterPhiReg)
       .addImm(ARMVCC::None)
+      .addReg(0)
       .addReg(0);
 
   BuildMI(TpLoopBody, Dl, TII->get(ARM::t2SUBri), RemainingElementsReg)
@@ -11560,7 +11561,8 @@ static void genTPLoopBody(MachineBasicBlock *TpLoopBody,
         .addReg(SrcPhiReg)
         .addImm(16)
         .addImm(ARMVCC::Then)
-        .addUse(VccrReg);
+        .addUse(VccrReg)
+        .addReg(0);
   } else
     SrcValueReg = OpSrcReg;
 
@@ -11570,7 +11572,8 @@ static void genTPLoopBody(MachineBasicBlock *TpLoopBody,
       .addReg(DestPhiReg)
       .addImm(16)
       .addImm(ARMVCC::Then)
-      .addUse(VccrReg);
+      .addUse(VccrReg)
+      .addReg(0);
 
   // Add the pseudoInstrs for decrementing the loop counter and marking the
   // end:t2DoLoopDec and t2DoLoopEnd

diff --git a/llvm/lib/Target/ARM/ARMInstrCDE.td b/llvm/lib/Target/ARM/ARMInstrCDE.td
index 0e97668e2e013..54e27a6be5583 100644
--- a/llvm/lib/Target/ARM/ARMInstrCDE.td
+++ b/llvm/lib/Target/ARM/ARMInstrCDE.td
@@ -612,14 +612,14 @@ multiclass VCXPredicatedPat_m<MVEVectorVTInfo VTI> {
                                     (VTI.Vec MQPR:$inactive), timm:$imm,
                                     (VTI.Pred VCCR:$pred))),
             (VTI.Vec (CDE_VCX1_vec p_imm:$coproc, imm_12b:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred),
+                                    (VTI.Pred VCCR:$pred), zero_reg,
                                     (VTI.Vec MQPR:$inactive)))>;
   def : Pat<(VTI.Vec (int_arm_cde_vcx1qa_predicated timm:$coproc,
                                     (VTI.Vec MQPR:$acc), timm:$imm,
                                     (VTI.Pred VCCR:$pred))),
             (VTI.Vec (CDE_VCX1A_vec p_imm:$coproc, (VTI.Vec MQPR:$acc),
                                     imm_12b:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred)))>;
+                                    (VTI.Pred VCCR:$pred), zero_reg))>;
 
   def : Pat<(VTI.Vec (int_arm_cde_vcx2q_predicated timm:$coproc,
                                     (VTI.Vec MQPR:$inactive),
@@ -627,7 +627,7 @@ multiclass VCXPredicatedPat_m<MVEVectorVTInfo VTI> {
                                     (VTI.Pred VCCR:$pred))),
             (VTI.Vec (CDE_VCX2_vec p_imm:$coproc, (v16i8 MQPR:$n),
                                     imm_7b:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred),
+                                    (VTI.Pred VCCR:$pred), zero_reg,
                                     (VTI.Vec MQPR:$inactive)))>;
   def : Pat<(VTI.Vec (int_arm_cde_vcx2qa_predicated timm:$coproc,
                                     (VTI.Vec MQPR:$acc),
@@ -635,7 +635,7 @@ multiclass VCXPredicatedPat_m<MVEVectorVTInfo VTI> {
                                     (VTI.Pred VCCR:$pred))),
             (VTI.Vec (CDE_VCX2A_vec p_imm:$coproc, (VTI.Vec MQPR:$acc),
                                     (v16i8 MQPR:$n), timm:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred)))>;
+                                    (VTI.Pred VCCR:$pred), zero_reg))>;
 
   def : Pat<(VTI.Vec (int_arm_cde_vcx3q_predicated timm:$coproc,
                                     (VTI.Vec MQPR:$inactive),
@@ -645,7 +645,7 @@ multiclass VCXPredicatedPat_m<MVEVectorVTInfo VTI> {
             (VTI.Vec (CDE_VCX3_vec p_imm:$coproc, (v16i8 MQPR:$n),
                                     (v16i8 MQPR:$m),
                                     imm_4b:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred),
+                                    (VTI.Pred VCCR:$pred), zero_reg,
                                     (VTI.Vec MQPR:$inactive)))>;
   def : Pat<(VTI.Vec (int_arm_cde_vcx3qa_predicated timm:$coproc,
                                     (VTI.Vec MQPR:$acc),
@@ -654,7 +654,7 @@ multiclass VCXPredicatedPat_m<MVEVectorVTInfo VTI> {
             (VTI.Vec (CDE_VCX3A_vec p_imm:$coproc, (VTI.Vec MQPR:$acc),
                                     (v16i8 MQPR:$n), (v16i8 MQPR:$m),
                                     imm_4b:$imm, ARMVCCThen,
-                                    (VTI.Pred VCCR:$pred)))>;
+                                    (VTI.Pred VCCR:$pred), zero_reg))>;
 }
 
 let Predicates = [HasCDE, HasMVEInt] in

diff --git a/llvm/lib/Target/ARM/ARMInstrFormats.td b/llvm/lib/Target/ARM/ARMInstrFormats.td
index a881a59d62817..b00f9747acaf6 100644
--- a/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -249,10 +249,10 @@ def VPTPredROperand : AsmOperandClass {
 
 // Base class for both kinds of vpred.
 class vpred_ops<dag extra_op, dag extra_mi> : OperandWithDefaultOps<OtherVT,
-            !con((ops (i32 0), (i32 zero_reg)), extra_op)> {
+            !con((ops (i32 0), (i32 zero_reg), (i32 zero_reg)), extra_op)> {
   let PrintMethod = "printVPTPredicateOperand";
   let OperandNamespace = "ARM";
-  let MIOperandInfo = !con((ops i32imm:$cond, VCCR:$cond_reg), extra_mi);
+  let MIOperandInfo = !con((ops i32imm:$cond, VCCR:$cond_reg, GPRlr:$tp_reg), extra_mi);
 
   // For convenience, we provide a string value that can be appended
   // to the constraints string. It's empty for vpred_n, and for

diff --git a/llvm/lib/Target/ARM/ARMInstrMVE.td b/llvm/lib/Target/ARM/ARMInstrMVE.td
index 6bfa852766e3f..6e71376510d5f 100644
--- a/llvm/lib/Target/ARM/ARMInstrMVE.td
+++ b/llvm/lib/Target/ARM/ARMInstrMVE.td
@@ -332,7 +332,7 @@ multiclass MVE_TwoOpPattern<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrinsic
                                              (VTI.Vec MQPR:$Qn))),
                                 (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                              ARMVCCThen, (VTI.Pred VCCR:$mask),
+                              ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                               (VTI.Vec MQPR:$inactive)))>;
 
     // Optionally with the select folded through the op
@@ -341,7 +341,7 @@ multiclass MVE_TwoOpPattern<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrinsic
                                              (VTI.Vec MQPR:$Qn),
                                              (VTI.Vec IdentityVec))))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                              ARMVCCThen, (VTI.Pred VCCR:$mask),
+                              ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                               (VTI.Vec MQPR:$Qm)))>;
   }
 
@@ -350,7 +350,7 @@ multiclass MVE_TwoOpPattern<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrinsic
                           PredOperands,
                           (? (VTI.Pred VCCR:$mask), (VTI.Vec MQPR:$inactive)))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                            ARMVCCThen, (VTI.Pred VCCR:$mask),
+                            ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                             (VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -368,7 +368,7 @@ multiclass MVE_TwoOpPatternDup<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrin
                                              (VTI.Vec (ARMvdup rGPR:$Rn)))),
                                 (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), rGPR:$Rn,
-                              ARMVCCThen, (VTI.Pred VCCR:$mask),
+                              ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                               (VTI.Vec MQPR:$inactive)))>;
 
     // Optionally with the select folded through the op
@@ -377,7 +377,7 @@ multiclass MVE_TwoOpPatternDup<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrin
                                              (ARMvdup rGPR:$Rn),
                                              (VTI.Vec IdentityVec))))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), rGPR:$Rn,
-                              ARMVCCThen, (VTI.Pred VCCR:$mask),
+                              ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                               (VTI.Vec MQPR:$Qm)))>;
   }
 
@@ -386,7 +386,7 @@ multiclass MVE_TwoOpPatternDup<MVEVectorVTInfo VTI, SDPatternOperator Op, Intrin
                           PredOperands,
                           (? (VTI.Pred VCCR:$mask), (VTI.Vec MQPR:$inactive)))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), rGPR:$Rn,
-                            ARMVCCThen, (VTI.Pred VCCR:$mask),
+                            ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                             (VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -652,7 +652,7 @@ multiclass MVE_VABAV_m<MVEVectorVTInfo VTI> {
                          (VTI.Pred VCCR:$mask))),
               (i32 (Inst (i32 rGPR:$Rda_src),
                          (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                         ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                         ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
   }
 }
 
@@ -710,11 +710,11 @@ multiclass MVE_VADDV_A<MVEVectorVTInfo VTI> {
       def : Pat<(i32 (vecreduce_add (VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                                       (VTI.Vec MQPR:$vec),
                                                       (VTI.Vec ARMimmAllZerosV))))),
-                (i32 (InstN $vec, ARMVCCThen, $pred))>;
+                (i32 (InstN $vec, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(i32 (ARMVADDVu (VTI.Vec MQPR:$vec))),
                 (i32 (InstN $vec))>;
       def : Pat<(i32 (ARMVADDVpu (VTI.Vec MQPR:$vec), (VTI.Pred VCCR:$pred))),
-                (i32 (InstN $vec, ARMVCCThen, $pred))>;
+                (i32 (InstN $vec, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(i32 (add (i32 (vecreduce_add (VTI.Vec MQPR:$vec))),
                           (i32 tGPREven:$acc))),
                 (i32 (InstA $acc, $vec))>;
@@ -722,13 +722,13 @@ multiclass MVE_VADDV_A<MVEVectorVTInfo VTI> {
                                                                 (VTI.Vec MQPR:$vec),
                                                                 (VTI.Vec ARMimmAllZerosV))))),
                           (i32 tGPREven:$acc))),
-                (i32 (InstA $acc, $vec, ARMVCCThen, $pred))>;
+                (i32 (InstA $acc, $vec, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(i32 (add (i32 (ARMVADDVu (VTI.Vec MQPR:$vec))),
                           (i32 tGPREven:$acc))),
                 (i32 (InstA $acc, $vec))>;
       def : Pat<(i32 (add (i32 (ARMVADDVpu (VTI.Vec MQPR:$vec), (VTI.Pred VCCR:$pred))),
                           (i32 tGPREven:$acc))),
-                (i32 (InstA $acc, $vec, ARMVCCThen, $pred))>;
+                (i32 (InstA $acc, $vec, ARMVCCThen, $pred, zero_reg))>;
     } else {
       def : Pat<(i32 (ARMVADDVs (VTI.Vec MQPR:$vec))),
                 (i32 (InstN $vec))>;
@@ -736,21 +736,21 @@ multiclass MVE_VADDV_A<MVEVectorVTInfo VTI> {
                           (i32 tGPREven:$acc))),
                 (i32 (InstA $acc, $vec))>;
       def : Pat<(i32 (ARMVADDVps (VTI.Vec MQPR:$vec), (VTI.Pred VCCR:$pred))),
-                (i32 (InstN $vec, ARMVCCThen, $pred))>;
+                (i32 (InstN $vec, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(i32 (add (i32 (ARMVADDVps (VTI.Vec MQPR:$vec), (VTI.Pred VCCR:$pred))),
                           (i32 tGPREven:$acc))),
-                (i32 (InstA $acc, $vec, ARMVCCThen, $pred))>;
+                (i32 (InstA $acc, $vec, ARMVCCThen, $pred, zero_reg))>;
     }
 
     def : Pat<(i32 (int_arm_mve_addv_predicated (VTI.Vec MQPR:$vec),
                                                 (i32 VTI.Unsigned),
                                                 (VTI.Pred VCCR:$pred))),
-              (i32 (InstN $vec, ARMVCCThen, $pred))>;
+              (i32 (InstN $vec, ARMVCCThen, $pred, zero_reg))>;
     def : Pat<(i32 (add (int_arm_mve_addv_predicated (VTI.Vec MQPR:$vec),
                                                      (i32 VTI.Unsigned),
                                                      (VTI.Pred VCCR:$pred)),
                         (i32 tGPREven:$acc))),
-              (i32 (InstA $acc, $vec, ARMVCCThen, $pred))>;
+              (i32 (InstA $acc, $vec, ARMVCCThen, $pred, zero_reg))>;
   }
 }
 
@@ -821,11 +821,11 @@ multiclass MVE_VADDLV_A<MVEVectorVTInfo VTI> {
     def : Pat<(ARMVADDLVA tGPREven:$acclo, tGPROdd:$acchi, (v4i32 MQPR:$vec)),
               (InstA tGPREven:$acclo, tGPROdd:$acchi, (v4i32 MQPR:$vec))>;
     def : Pat<(ARMVADDLVp (v4i32 MQPR:$vec), (VTI.Pred VCCR:$pred)),
-              (InstN (v4i32 MQPR:$vec), ARMVCCThen, (VTI.Pred VCCR:$pred))>;
+              (InstN (v4i32 MQPR:$vec), ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg)>;
     def : Pat<(ARMVADDLVAp tGPREven:$acclo, tGPROdd:$acchi, (v4i32 MQPR:$vec),
                            (VTI.Pred VCCR:$pred)),
               (InstA tGPREven:$acclo, tGPROdd:$acchi, (v4i32 MQPR:$vec),
-                     ARMVCCThen, (VTI.Pred VCCR:$pred))>;
+                     ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg)>;
   }
 }
 
@@ -876,7 +876,7 @@ multiclass MVE_VMINMAXNMV_p<string iname, bit notAbs, bit isMin,
                                    (VTI.Pred VCCR:$pred))),
            (COPY_TO_REGCLASS (Inst (COPY_TO_REGCLASS ScalarReg:$prev, rGPR),
                                    (VTI.Vec MQPR:$vec),
-                                   ARMVCCThen, (VTI.Pred VCCR:$pred)),
+                                   ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg),
                               ScalarReg)>;
   }
 }
@@ -931,7 +931,7 @@ multiclass MVE_VMINMAXV_p<string iname, bit notAbs, bit isMin,
               (i32 (Inst (i32 rGPR:$prev), (VTI.Vec MQPR:$vec)))>;
     def : Pat<(i32 !con(args, (pred_intr (VTI.Pred VCCR:$pred)))),
               (i32 (Inst (i32 rGPR:$prev), (VTI.Vec MQPR:$vec),
-                         ARMVCCThen, (VTI.Pred VCCR:$pred)))>;
+                         ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg))>;
   }
 }
 
@@ -1074,7 +1074,7 @@ multiclass MVE_VMLAMLSDAV_A<string iname, string x, MVEVectorVTInfo VTI,
                             (VTI.Pred VCCR:$mask))),
               (i32 (!cast<Instruction>(NAME # x # VTI.Suffix)
                             (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
 
     def : Pat<(i32 (int_arm_mve_vmldava
                             (i32 VTI.Unsigned),
@@ -1096,7 +1096,7 @@ multiclass MVE_VMLAMLSDAV_A<string iname, string x, MVEVectorVTInfo VTI,
               (i32 (!cast<Instruction>(NAME # "a" # x # VTI.Suffix)
                             (i32 tGPREven:$RdaSrc),
                             (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
   }
 }
 
@@ -1200,47 +1200,47 @@ let Predicates = [HasMVEInt] in {
   def : Pat<(i32 (vecreduce_add (vselect (v4i1 VCCR:$pred),
                                          (mul (v4i32 MQPR:$src1), (v4i32 MQPR:$src2)),
                                          (v4i32 ARMimmAllZerosV)))),
-            (i32 (MVE_VMLADAVu32 $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVu32 $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (vecreduce_add (vselect (v8i1 VCCR:$pred),
                                          (mul (v8i16 MQPR:$src1), (v8i16 MQPR:$src2)),
                                          (v8i16 ARMimmAllZerosV)))),
-            (i32 (MVE_VMLADAVu16 $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVu16 $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (ARMVMLAVps (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred))),
-            (i32 (MVE_VMLADAVs16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVs16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (ARMVMLAVpu (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred))),
-            (i32 (MVE_VMLADAVu16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVu16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (vecreduce_add (vselect (v16i1 VCCR:$pred),
                                          (mul (v16i8 MQPR:$src1), (v16i8 MQPR:$src2)),
                                          (v16i8 ARMimmAllZerosV)))),
-            (i32 (MVE_VMLADAVu8 $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVu8 $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (ARMVMLAVps (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), (v16i1 VCCR:$pred))),
-            (i32 (MVE_VMLADAVs8 (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVs8 (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (ARMVMLAVpu (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), (v16i1 VCCR:$pred))),
-            (i32 (MVE_VMLADAVu8 (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVu8 (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
 
   def : Pat<(i32 (add (i32 (vecreduce_add (vselect (v4i1 VCCR:$pred),
                                                    (mul (v4i32 MQPR:$src1), (v4i32 MQPR:$src2)),
                                                    (v4i32 ARMimmAllZerosV)))),
                       (i32 tGPREven:$src3))),
-            (i32 (MVE_VMLADAVau32 $src3, $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVau32 $src3, $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (i32 (vecreduce_add (vselect (v8i1 VCCR:$pred),
                                                    (mul (v8i16 MQPR:$src1), (v8i16 MQPR:$src2)),
                                                    (v8i16 ARMimmAllZerosV)))),
                       (i32 tGPREven:$src3))),
-            (i32 (MVE_VMLADAVau16 $src3, $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVau16 $src3, $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (ARMVMLAVps (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)), tGPREven:$Rd)),
-            (i32 (MVE_VMLADAVas16 tGPREven:$Rd, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVas16 tGPREven:$Rd, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (ARMVMLAVpu (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)), tGPREven:$Rd)),
-            (i32 (MVE_VMLADAVau16 tGPREven:$Rd, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVau16 tGPREven:$Rd, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (i32 (vecreduce_add (vselect (v16i1 VCCR:$pred),
                                                    (mul (v16i8 MQPR:$src1), (v16i8 MQPR:$src2)),
                                                    (v16i8 ARMimmAllZerosV)))),
                       (i32 tGPREven:$src3))),
-            (i32 (MVE_VMLADAVau8 $src3, $src1, $src2, ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVau8 $src3, $src1, $src2, ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (ARMVMLAVps (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), (v16i1 VCCR:$pred)), tGPREven:$Rd)),
-            (i32 (MVE_VMLADAVas8 tGPREven:$Rd, (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVas8 tGPREven:$Rd, (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
   def : Pat<(i32 (add (ARMVMLAVpu (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), (v16i1 VCCR:$pred)), tGPREven:$Rd)),
-            (i32 (MVE_VMLADAVau8 tGPREven:$Rd, (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred))>;
+            (i32 (MVE_VMLADAVau8 tGPREven:$Rd, (v16i8 MQPR:$val1), (v16i8 MQPR:$val2), ARMVCCThen, $pred, zero_reg))>;
 }
 
 // vmlav aliases vmladav
@@ -1363,22 +1363,22 @@ let Predicates = [HasMVEInt] in {
 
   // Predicated
   def : Pat<(ARMVMLALVps (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), (v4i1 VCCR:$pred)),
-            (MVE_VMLALDAVs32 (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVs32 (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVpu (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), (v4i1 VCCR:$pred)),
-            (MVE_VMLALDAVu32 (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVu32 (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVps (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)),
-            (MVE_VMLALDAVs16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVs16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVpu (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)),
-            (MVE_VMLALDAVu16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVu16 (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
 
   def : Pat<(ARMVMLALVAps tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), (v4i1 VCCR:$pred)),
-            (MVE_VMLALDAVas32 tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVas32 tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVApu tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), (v4i1 VCCR:$pred)),
-            (MVE_VMLALDAVau32 tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVau32 tGPREven:$Rda, tGPROdd:$Rdb, (v4i32 MQPR:$val1), (v4i32 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVAps tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)),
-            (MVE_VMLALDAVas16 tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVas16 tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
   def : Pat<(ARMVMLALVApu tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), (v8i1 VCCR:$pred)),
-            (MVE_VMLALDAVau16 tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred)>;
+            (MVE_VMLALDAVau16 tGPREven:$Rda, tGPROdd:$Rdb, (v8i16 MQPR:$val1), (v8i16 MQPR:$val2), ARMVCCThen, $pred, zero_reg)>;
 }
 
 // vmlalv aliases vmlaldav
@@ -1575,7 +1575,7 @@ multiclass MVE_VREV_basic_patterns<int revbits, list<MVEVectorVTInfo> VTIs,
     def : Pat<(VTI.Vec (int_arm_mve_vrev_predicated (VTI.Vec MQPR:$src),
                   revbits, (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$src), ARMVCCThen,
-                  (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive)))>;
+                  (VTI.Pred VCCR:$pred), zero_reg, (VTI.Vec MQPR:$inactive)))>;
   }
 }
 
@@ -1608,7 +1608,7 @@ let Predicates = [HasMVEInt] in {
     def : Pat<(VTI.Vec (int_arm_mve_mvn_predicated (VTI.Vec MQPR:$val1),
                        (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (MVE_VMVN (VTI.Vec MQPR:$val1), ARMVCCThen,
-                       (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive)))>;
+                       (VTI.Pred VCCR:$pred), zero_reg, (VTI.Vec MQPR:$inactive)))>;
   }
 }
 
@@ -1724,7 +1724,7 @@ multiclass MVE_bit_cmode_p<string iname, bit opcode,
     def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                           UnpredPat, (VTI.Vec MQPR:$src))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$src), imm_type:$simm,
-                             ARMVCCThen, (VTI.Pred VCCR:$pred)))>;
+                             ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg))>;
   }
 }
 
@@ -2206,7 +2206,7 @@ multiclass MVE_VRHADD_m<MVEVectorVTInfo VTI,
                             (i32 VTI.Unsigned), (VTI.Pred VCCR:$mask),
                             (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -2294,7 +2294,7 @@ multiclass MVE_VHADD_m<MVEVectorVTInfo VTI,
     def : Pat<(VTI.Vec (pred_int (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn), (i32 VTI.Unsigned),
                             (VTI.Pred VCCR:$mask), (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -2335,7 +2335,7 @@ multiclass MVE_VHSUB_m<MVEVectorVTInfo VTI,
                             (i32 VTI.Unsigned), (VTI.Pred VCCR:$mask),
                             (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -2393,27 +2393,27 @@ let Predicates = [HasMVEInt] in {
   def : Pat<(v16i8 (vselect (v16i1 VCCR:$pred),
                             (v16i8 (ARMvdup (i32 rGPR:$elem))),
                             (v16i8 MQPR:$inactive))),
-            (MVE_VDUP8  rGPR:$elem, ARMVCCThen, (v16i1 VCCR:$pred),
+            (MVE_VDUP8  rGPR:$elem, ARMVCCThen, (v16i1 VCCR:$pred), zero_reg,
                         (v16i8 MQPR:$inactive))>;
   def : Pat<(v8i16 (vselect (v8i1 VCCR:$pred),
                             (v8i16 (ARMvdup (i32 rGPR:$elem))),
                             (v8i16 MQPR:$inactive))),
-            (MVE_VDUP16 rGPR:$elem, ARMVCCThen, (v8i1 VCCR:$pred),
+            (MVE_VDUP16 rGPR:$elem, ARMVCCThen, (v8i1 VCCR:$pred), zero_reg,
                             (v8i16 MQPR:$inactive))>;
   def : Pat<(v4i32 (vselect (v4i1 VCCR:$pred),
                             (v4i32 (ARMvdup (i32 rGPR:$elem))),
                             (v4i32 MQPR:$inactive))),
-            (MVE_VDUP32 rGPR:$elem, ARMVCCThen, (v4i1 VCCR:$pred),
+            (MVE_VDUP32 rGPR:$elem, ARMVCCThen, (v4i1 VCCR:$pred), zero_reg,
                             (v4i32 MQPR:$inactive))>;
   def : Pat<(v4f32 (vselect (v4i1 VCCR:$pred),
                             (v4f32 (ARMvdup (i32 rGPR:$elem))),
                             (v4f32 MQPR:$inactive))),
-            (MVE_VDUP32 rGPR:$elem, ARMVCCThen, (v4i1 VCCR:$pred),
+            (MVE_VDUP32 rGPR:$elem, ARMVCCThen, (v4i1 VCCR:$pred), zero_reg,
                             (v4f32 MQPR:$inactive))>;
   def : Pat<(v8f16 (vselect (v8i1 VCCR:$pred),
                             (v8f16 (ARMvdup (i32 rGPR:$elem))),
                             (v8f16 MQPR:$inactive))),
-            (MVE_VDUP16 rGPR:$elem, ARMVCCThen, (v8i1 VCCR:$pred),
+            (MVE_VDUP16 rGPR:$elem, ARMVCCThen, (v8i1 VCCR:$pred), zero_reg,
                             (v8f16 MQPR:$inactive))>;
 }
 
@@ -2461,7 +2461,7 @@ multiclass MVE_VCLSCLZ_p<string opname, bit opcode, MVEVectorVTInfo VTI,
     def : Pat<(VTI.Vec (pred_int (VTI.Vec MQPR:$val), (VTI.Pred VCCR:$pred),
                                  (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$val), ARMVCCThen,
-                             (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive)))>;
+                             (VTI.Pred VCCR:$pred), zero_reg, (VTI.Vec MQPR:$inactive)))>;
   }
 }
 
@@ -2507,7 +2507,7 @@ multiclass MVE_VABSNEG_int_m<string iname, bit negate, bit saturate,
 
     def : Pat<(VTI.Vec (pred_int  (VTI.Vec MQPR:$v), (VTI.Pred VCCR:$mask),
                                   (VTI.Vec MQPR:$inactive))),
-              (VTI.Vec (Inst $v, ARMVCCThen, $mask, $inactive))>;
+              (VTI.Vec (Inst $v, ARMVCCThen, $mask, zero_reg, $inactive))>;
   }
 }
 
@@ -2631,11 +2631,11 @@ let Predicates = [HasMVEInt] in {
   def : Pat<(v8i16 (vselect (v8i1 VCCR:$pred), (ARMvmvnImm timm:$simm),
                             MQPR:$inactive)),
             (v8i16 (MVE_VMVNimmi16 nImmSplatI16:$simm,
-                            ARMVCCThen, VCCR:$pred, MQPR:$inactive))>;
+                            ARMVCCThen, VCCR:$pred, zero_reg, MQPR:$inactive))>;
   def : Pat<(v4i32 (vselect (v4i1 VCCR:$pred), (ARMvmvnImm timm:$simm),
                             MQPR:$inactive)),
             (v4i32 (MVE_VMVNimmi32 nImmSplatI32:$simm,
-                            ARMVCCThen, VCCR:$pred, MQPR:$inactive))>;
+                            ARMVCCThen, VCCR:$pred, zero_reg, MQPR:$inactive))>;
 }
 
 class MVE_VMINMAXA<string iname, string suffix, bits<2> size,
@@ -2676,7 +2676,7 @@ multiclass MVE_VMINMAXA_m<string iname, MVEVectorVTInfo VTI,
     def : Pat<(VTI.Vec (pred_int (VTI.Vec MQPR:$Qd), (VTI.Vec MQPR:$Qm),
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qd), (VTI.Vec MQPR:$Qm),
-                            ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                            ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
   }
 }
 
@@ -2757,7 +2757,7 @@ multiclass MVE_VMOVL_m<bit top, string chr, MVEVectorVTInfo OutVTI,
                             (OutVTI.Pred VCCR:$pred),
                             (OutVTI.Vec MQPR:$inactive))),
             (OutVTI.Vec (Inst (InVTI.Vec MQPR:$src), ARMVCCThen,
-                            (OutVTI.Pred VCCR:$pred),
+                            (OutVTI.Pred VCCR:$pred), zero_reg,
                             (OutVTI.Vec MQPR:$inactive)))>;
 }
 
@@ -2895,14 +2895,14 @@ multiclass MVE_VSHLL_patterns<MVEVectorVTInfo VTI, int top> {
                                     (VTI.DblPred VCCR:$mask),
                                     (VTI.DblVec MQPR:$inactive))),
             (VTI.DblVec (inst_imm   (VTI.Vec MQPR:$src), imm:$imm,
-                                    ARMVCCThen, (VTI.DblPred VCCR:$mask),
+                                    ARMVCCThen, (VTI.DblPred VCCR:$mask), zero_reg,
                                     (VTI.DblVec MQPR:$inactive)))>;
   def : Pat<(VTI.DblVec (pred_int   (VTI.Vec MQPR:$src), (i32 VTI.LaneBits),
                                     (i32 VTI.Unsigned), (i32 top),
                                     (VTI.DblPred VCCR:$mask),
                                     (VTI.DblVec MQPR:$inactive))),
             (VTI.DblVec (inst_lw    (VTI.Vec MQPR:$src), ARMVCCThen,
-                                    (VTI.DblPred VCCR:$mask),
+                                    (VTI.DblPred VCCR:$mask), zero_reg,
                                     (VTI.DblVec MQPR:$inactive)))>;
 }
 
@@ -3063,7 +3063,7 @@ multiclass MVE_VSHRN_patterns<MVE_shift_imm_partial inst,
             (OutVTI.Vec outparams)>;
   def : Pat<(OutVTI.Vec !con(inparams, (int_arm_mve_vshrn_predicated
                                            (InVTI.Pred VCCR:$pred)))),
-            (OutVTI.Vec !con(outparams, (? ARMVCCThen, VCCR:$pred)))>;
+            (OutVTI.Vec !con(outparams, (? ARMVCCThen, VCCR:$pred, zero_reg)))>;
 }
 
 defm : MVE_VSHRN_patterns<MVE_VSHRNi16bh,    MVE_v16s8, MVE_v8s16, 0,0,0>;
@@ -3153,7 +3153,7 @@ multiclass MVE_shift_by_vec_p<string iname, MVEVectorVTInfo VTI, bit q, bit r> {
                          (i32 q), (i32 r), (i32 VTI.Unsigned),
                          (VTI.Pred VCCR:$mask), (VTI.Vec MQPR:$inactive))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$in), (VTI.Vec MQPR:$sh),
-                           ARMVCCThen, (VTI.Pred VCCR:$mask),
+                           ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                            (VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -3264,7 +3264,7 @@ multiclass MVE_VSxI_patterns<MVE_VSxI_imm inst, string name,
   def : Pat<(VTI.Vec !setdagop(inparams, unpred_int)),
             (VTI.Vec outparams)>;
   def : Pat<(VTI.Vec !con(inparams, (pred_int (VTI.Pred VCCR:$pred)))),
-            (VTI.Vec !con(outparams, (? ARMVCCThen, VCCR:$pred)))>;
+            (VTI.Vec !con(outparams, (? ARMVCCThen, VCCR:$pred, zero_reg)))>;
 }
 
 defm : MVE_VSxI_patterns<MVE_VSLIimm8,  "vsli", MVE_v16i8>;
@@ -3401,7 +3401,7 @@ multiclass MVE_shift_imm_patterns<MVE_shift_with_imm inst> {
                                   (inst.VTI.Vec MQPR:$inactive)))),
             (inst.VTI.Vec (inst (inst.VTI.Vec MQPR:$src),
                                 inst.immediateType:$imm,
-                                ARMVCCThen, (inst.VTI.Pred VCCR:$mask),
+                                ARMVCCThen, (inst.VTI.Pred VCCR:$mask), zero_reg,
                                 (inst.VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -3498,7 +3498,7 @@ multiclass MVE_immediate_shift_patterns_inner<
                           (pred_int (VTI.Pred VCCR:$mask),
                                    (VTI.Vec MQPR:$inactive)))),
             (VTI.Vec (inst (VTI.Vec MQPR:$src), imm_operand_type:$imm,
-                           ARMVCCThen, (VTI.Pred VCCR:$mask),
+                           ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                            (VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -3569,7 +3569,7 @@ multiclass MVE_VRINT_m<MVEVectorVTInfo VTI, string suffix, bits<3> opcode,
     def : Pat<(VTI.Vec (pred_int (VTI.Vec MQPR:$val), (VTI.Pred VCCR:$pred),
                                  (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$val), ARMVCCThen,
-                             (VTI.Pred VCCR:$pred), (VTI.Vec MQPR:$inactive)))>;
+                             (VTI.Pred VCCR:$pred), zero_reg, (VTI.Vec MQPR:$inactive)))>;
   }
 }
 
@@ -3666,7 +3666,7 @@ multiclass MVE_VCMLA_m<MVEVectorVTInfo VTI, bit size> {
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qd_src), (VTI.Vec MQPR:$Qn),
                              (VTI.Vec MQPR:$Qm), imm:$rot,
-                             ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
 
   }
 }
@@ -3714,20 +3714,20 @@ multiclass MVE_VFMA_fp_multi<string iname, bit fms, MVEVectorVTInfo VTI> {
       def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                   (VTI.Vec (fma (fneg m1), m2, add)),
                                   add)),
-                (Inst $add, $m1, $m2, ARMVCCThen, $pred)>;
+                (Inst $add, $m1, $m2, ARMVCCThen, $pred, zero_reg)>;
       def : Pat<(VTI.Vec (pred_int (fneg m1), m2, add, pred)),
-                (Inst $add, $m1, $m2, ARMVCCThen, $pred)>;
+                (Inst $add, $m1, $m2, ARMVCCThen, $pred, zero_reg)>;
       def : Pat<(VTI.Vec (pred_int m1, (fneg m2), add, pred)),
-                (Inst $add, $m1, $m2, ARMVCCThen, $pred)>;
+                (Inst $add, $m1, $m2, ARMVCCThen, $pred, zero_reg)>;
     } else {
       def : Pat<(VTI.Vec (fma m1, m2, add)),
                 (Inst $add, $m1, $m2)>;
       def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                   (VTI.Vec (fma m1, m2, add)),
                                   add)),
-                (Inst $add, $m1, $m2, ARMVCCThen, $pred)>;
+                (Inst $add, $m1, $m2, ARMVCCThen, $pred, zero_reg)>;
       def : Pat<(VTI.Vec (pred_int m1, m2, add, pred)),
-                (Inst $add, $m1, $m2, ARMVCCThen, $pred)>;
+                (Inst $add, $m1, $m2, ARMVCCThen, $pred, zero_reg)>;
     }
   }
 }
@@ -3796,7 +3796,7 @@ multiclass MVE_VCADD_m<MVEVectorVTInfo VTI, bit size, string cstr=""> {
                             (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
 
   }
@@ -3838,7 +3838,7 @@ multiclass MVE_VABDT_fp_m<MVEVectorVTInfo VTI,
                             (i32 0), (VTI.Pred VCCR:$mask),
                             (VTI.Vec MQPR:$inactive))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -3914,7 +3914,7 @@ multiclass MVE_VCVT_fix_patterns<Instruction Inst, bit U, MVEVectorVTInfo DestVT
                               imm:$scale,
                               (DestVTI.Pred VCCR:$mask))),
               (DestVTI.Vec (Inst (SrcVTI.Vec MQPR:$Qm), imm:$scale,
-                             ARMVCCThen, (DestVTI.Pred VCCR:$mask),
+                             ARMVCCThen, (DestVTI.Pred VCCR:$mask), zero_reg,
                              (DestVTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -3977,7 +3977,7 @@ multiclass MVE_VCVT_fp_int_anpm_inner<MVEVectorVTInfo Int, MVEVectorVTInfo Flt,
     def : Pat<(Int.Vec (PredIntr (i32 Int.Unsigned), (Int.Vec MQPR:$inactive),
                                  (Flt.Vec MQPR:$in), (Flt.Pred VCCR:$pred))),
               (Int.Vec (Inst (Flt.Vec MQPR:$in), ARMVCCThen,
-                             (Flt.Pred VCCR:$pred), (Int.Vec MQPR:$inactive)))>;
+                             (Flt.Pred VCCR:$pred), zero_reg, (Int.Vec MQPR:$inactive)))>;
   }
 }
 
@@ -4033,7 +4033,7 @@ multiclass MVE_VCVT_fp_int_m<MVEVectorVTInfo Dest, MVEVectorVTInfo Src,
                              (Src.Vec MQPR:$src), (i32 Unsigned),
                              (Src.Pred VCCR:$mask), (Dest.Vec MQPR:$inactive))),
               (Dest.Vec (Inst (Src.Vec MQPR:$src), ARMVCCThen,
-                              (Src.Pred VCCR:$mask),
+                              (Src.Pred VCCR:$mask), zero_reg,
                               (Dest.Vec MQPR:$inactive)))>;
   }
 }
@@ -4089,7 +4089,7 @@ multiclass MVE_VABSNEG_fp_m<string iname, SDNode unpred_op, Intrinsic pred_int,
               (VTI.Vec (Inst $v))>;
     def : Pat<(VTI.Vec (pred_int  (VTI.Vec MQPR:$v), (VTI.Pred VCCR:$mask),
                                   (VTI.Vec MQPR:$inactive))),
-              (VTI.Vec (Inst $v, ARMVCCThen, $mask, $inactive))>;
+              (VTI.Vec (Inst $v, ARMVCCThen, $mask, zero_reg, $inactive))>;
   }
 }
 
@@ -4142,7 +4142,7 @@ multiclass MVE_VMAXMINNMA_m<string iname, MVEVectorVTInfo VTI,
     def : Pat<(VTI.Vec (pred_int (VTI.Vec MQPR:$Qd), (VTI.Vec MQPR:$Qm),
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qd), (VTI.Vec MQPR:$Qm),
-                            ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                            ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
   }
 }
 
@@ -4310,11 +4310,11 @@ multiclass unpred_vcmp_z<string suffix, PatLeaf fc> {
                 (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), ZR, fc))>;
 
   def : Pat<(v16i1 (and (v16i1 VCCR:$p1), (v16i1 (ARMvcmpz (v16i8 MQPR:$v1), fc)))),
-            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8r") (v16i8 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1))>;
+            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8r") (v16i8 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmpz (v8i16 MQPR:$v1), fc)))),
-            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16r") (v8i16 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16r") (v8i16 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmpz (v4i32 MQPR:$v1), fc)))),
-            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 }
 
 multiclass unpred_vcmp_r<string suffix, PatLeaf fc> {
@@ -4333,18 +4333,18 @@ multiclass unpred_vcmp_r<string suffix, PatLeaf fc> {
                  (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), (i32 rGPR:$v2), fc))>;
 
   def : Pat<(v16i1 (and (v16i1 VCCR:$p1), (v16i1 (ARMvcmp (v16i8 MQPR:$v1), (v16i8 MQPR:$v2), fc)))),
-            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8") (v16i8 MQPR:$v1), (v16i8 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8") (v16i8 MQPR:$v1), (v16i8 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmp (v8i16 MQPR:$v1), (v8i16 MQPR:$v2), fc)))),
-            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16") (v8i16 MQPR:$v1), (v8i16 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16") (v8i16 MQPR:$v1), (v8i16 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmp (v4i32 MQPR:$v1), (v4i32 MQPR:$v2), fc)))),
-            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32") (v4i32 MQPR:$v1), (v4i32 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32") (v4i32 MQPR:$v1), (v4i32 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 
   def : Pat<(v16i1 (and (v16i1 VCCR:$p1), (v16i1 (ARMvcmp (v16i8 MQPR:$v1), (v16i8 (ARMvdup rGPR:$v2)), fc)))),
-            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8r") (v16i8 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v16i1 (!cast<Instruction>("MVE_VCMP"#suffix#"8r") (v16i8 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmp (v8i16 MQPR:$v1), (v8i16 (ARMvdup rGPR:$v2)), fc)))),
-            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16r") (v8i16 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (!cast<Instruction>("MVE_VCMP"#suffix#"16r") (v8i16 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmp (v4i32 MQPR:$v1), (v4i32 (ARMvdup rGPR:$v2)), fc)))),
-            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (!cast<Instruction>("MVE_VCMP"#suffix#"32r") (v4i32 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 }
 
 multiclass unpred_vcmpf_z<PatLeaf fc> {
@@ -4354,9 +4354,9 @@ multiclass unpred_vcmpf_z<PatLeaf fc> {
                 (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), ZR, fc))>;
 
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmpz (v8f16 MQPR:$v1), fc)))),
-            (v8i1 (MVE_VCMPf16r (v8f16 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (MVE_VCMPf16r (v8f16 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmpz (v4f32 MQPR:$v1), fc)))),
-            (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), ZR, fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 }
 
 multiclass unpred_vcmpf_r<PatLeaf fc> {
@@ -4371,14 +4371,14 @@ multiclass unpred_vcmpf_r<PatLeaf fc> {
             (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), (i32 rGPR:$v2), fc))>;
 
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmp (v8f16 MQPR:$v1), (v8f16 MQPR:$v2), fc)))),
-            (v8i1 (MVE_VCMPf16 (v8f16 MQPR:$v1), (v8f16 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (MVE_VCMPf16 (v8f16 MQPR:$v1), (v8f16 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmp (v4f32 MQPR:$v1), (v4f32 MQPR:$v2), fc)))),
-            (v4i1 (MVE_VCMPf32 (v4f32 MQPR:$v1), (v4f32 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (MVE_VCMPf32 (v4f32 MQPR:$v1), (v4f32 MQPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 
   def : Pat<(v8i1 (and (v8i1 VCCR:$p1), (v8i1 (ARMvcmp (v8f16 MQPR:$v1), (v8f16 (ARMvdup rGPR:$v2)), fc)))),
-            (v8i1 (MVE_VCMPf16r (v8f16 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v8i1 (MVE_VCMPf16r (v8f16 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
   def : Pat<(v4i1 (and (v4i1 VCCR:$p1), (v4i1 (ARMvcmp (v4f32 MQPR:$v1), (v4f32 (ARMvdup rGPR:$v2)), fc)))),
-            (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1))>;
+            (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), (i32 rGPR:$v2), fc, ARMVCCThen, VCCR:$p1, zero_reg))>;
 }
 
 let Predicates = [HasMVEInt] in {
@@ -4541,7 +4541,7 @@ multiclass MVE_VQxDMLxDH_p<string iname, bit exch, bit round, bit subtract,
                           (? (VTI.Pred VCCR:$pred)))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$a), (VTI.Vec MQPR:$b),
                            (VTI.Vec MQPR:$c),
-                           ARMVCCThen, (VTI.Pred VCCR:$pred)))>;
+                           ARMVCCThen, (VTI.Pred VCCR:$pred), zero_reg))>;
 }
 
 multiclass MVE_VQxDMLxDH_multi<string iname, bit exch,
@@ -4595,7 +4595,7 @@ multiclass MVE_VCMUL_m<string iname, MVEVectorVTInfo VTI,
                             (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
 
   }
@@ -4647,7 +4647,7 @@ multiclass MVE_VMULL_m<MVEVectorVTInfo VTI,
                                uflag, (? (i32 Top), (VTI.DblPred VCCR:$mask),
                                          (VTI.DblVec MQPR:$inactive)))),
               (VTI.DblVec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                                ARMVCCThen, (VTI.DblPred VCCR:$mask),
+                                ARMVCCThen, (VTI.DblPred VCCR:$mask), zero_reg,
                                 (VTI.DblVec MQPR:$inactive)))>;
   }
 }
@@ -4772,7 +4772,7 @@ multiclass MVE_VxMULH_m<string iname, MVEVectorVTInfo VTI, SDNode unpred_op,
                               (i32 VTI.Unsigned), (VTI.Pred VCCR:$mask),
                               (VTI.Vec MQPR:$inactive))),
                 (VTI.Vec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                              ARMVCCThen, (VTI.Pred VCCR:$mask),
+                              ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                               (VTI.Vec MQPR:$inactive)))>;
     }
 
@@ -4867,7 +4867,7 @@ multiclass MVE_VMOVN_p<Instruction Inst, bit top,
                                   (InVTI.Pred VCCR:$pred))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$Qd_src),
                               (InVTI.Vec MQPR:$Qm),
-                              ARMVCCThen, (InVTI.Pred VCCR:$pred)))>;
+                              ARMVCCThen, (InVTI.Pred VCCR:$pred), zero_reg))>;
 }
 
 defm : MVE_VMOVN_p<MVE_VMOVNi32bh, 0, MVE_v8i16, MVE_v4i32>;
@@ -4889,7 +4889,7 @@ multiclass MVE_VQMOVN_p<Instruction Inst, bit outU, bit inU, bit top,
                                   (InVTI.Pred VCCR:$pred))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$Qd_src),
                               (InVTI.Vec MQPR:$Qm),
-                              ARMVCCThen, (InVTI.Pred VCCR:$pred)))>;
+                              ARMVCCThen, (InVTI.Pred VCCR:$pred), zero_reg))>;
 }
 
 defm : MVE_VQMOVN_p<MVE_VQMOVNs32bh,  0, 0, 0, MVE_v8i16, MVE_v4i32>;
@@ -4981,7 +4981,7 @@ multiclass MVE_VCVT_f2h_m<string iname, int half> {
                          (v8f16 MQPR:$Qd_src), (v4f32 MQPR:$Qm), (i32 half),
                          (v4i1 VCCR:$mask))),
               (v8f16 (Inst (v8f16 MQPR:$Qd_src), (v4f32 MQPR:$Qm),
-                           ARMVCCThen, (v4i1 VCCR:$mask)))>;
+                           ARMVCCThen, (v4i1 VCCR:$mask), zero_reg))>;
 
     def : Pat<(v8f16 (MVEvcvtn (v8f16 MQPR:$Qd_src), (v4f32 MQPR:$Qm), (i32 half))),
               (v8f16 (Inst (v8f16 MQPR:$Qd_src), (v4f32 MQPR:$Qm)))>;
@@ -4999,7 +4999,7 @@ multiclass MVE_VCVT_h2f_m<string iname, int half> {
                          (v4f32 MQPR:$inactive), (v8f16 MQPR:$Qm), (i32 half),
                          (v4i1 VCCR:$mask))),
               (v4f32 (Inst (v8f16 MQPR:$Qm), ARMVCCThen,
-                           (v4i1 VCCR:$mask), (v4f32 MQPR:$inactive)))>;
+                           (v4i1 VCCR:$mask), zero_reg, (v4f32 MQPR:$inactive)))>;
 
     def : Pat<(v4f32 (MVEvcvtl (v8f16 MQPR:$Qm), (i32 half))),
               (v4f32 (Inst (v8f16 MQPR:$Qm)))>;
@@ -5045,7 +5045,7 @@ multiclass MVE_VxCADD_m<string iname, MVEVectorVTInfo VTI,
                             (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
                             (VTI.Pred VCCR:$mask))),
               (VTI.Vec (Inst (VTI.Vec MQPR:$Qn), (VTI.Vec MQPR:$Qm),
-                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             imm:$rot, ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
 
   }
@@ -5121,7 +5121,7 @@ multiclass MVE_VQDMULL_m<string iname, MVEVectorVTInfo VTI, bit size, bit T,
                                     (i32 T), (VTI.DblPred VCCR:$mask),
                                     (VTI.DblVec MQPR:$inactive))),
               (VTI.DblVec (Inst (VTI.Vec MQPR:$Qm), (VTI.Vec MQPR:$Qn),
-                                ARMVCCThen, (VTI.DblPred VCCR:$mask),
+                                ARMVCCThen, (VTI.DblPred VCCR:$mask), zero_reg,
                                 (VTI.DblVec MQPR:$inactive)))>;
   }
 }
@@ -5200,7 +5200,7 @@ multiclass MVE_vec_scalar_int_pat_m<Instruction inst, MVEVectorVTInfo VTI,
                             (pred_op (VTI.Pred VCCR:$mask),
                                      (VTI.Vec MQPR:$inactive)))),
               (VTI.Vec (inst (VTI.Vec MQPR:$Qm), (i32 rGPR:$val),
-                             ARMVCCThen, (VTI.Pred VCCR:$mask),
+                             ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                              (VTI.Vec MQPR:$inactive)))>;
   }
 }
@@ -5306,7 +5306,7 @@ multiclass MVE_VQDMULL_qr_m<string iname, MVEVectorVTInfo VTI, bit size,
                                     (VTI.DblPred VCCR:$mask),
                                     (VTI.DblVec MQPR:$inactive))),
               (VTI.DblVec (Inst (VTI.Vec MQPR:$Qm), (i32 rGPR:$val),
-                             ARMVCCThen, (VTI.DblPred VCCR:$mask),
+                             ARMVCCThen, (VTI.DblPred VCCR:$mask), zero_reg,
                              (VTI.DblVec MQPR:$inactive)))>;
   }
 }
@@ -5411,7 +5411,7 @@ multiclass MVE_VxSHL_qr_p<string iname, MVEVectorVTInfo VTI, bit q, bit r> {
                          (i32 q), (i32 r), (i32 VTI.Unsigned),
                          (VTI.Pred VCCR:$mask))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$in), (i32 rGPR:$sh),
-                           ARMVCCThen, (VTI.Pred VCCR:$mask)))>;
+                           ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg))>;
 }
 
 multiclass MVE_VxSHL_qr_types<string iname, bit bit_7, bit bit_17> {
@@ -5470,7 +5470,7 @@ multiclass MVE_VBRSR_pat_m<MVEVectorVTInfo VTI, Instruction Inst> {
                           (VTI.Vec MQPR:$Qn), (i32 rGPR:$Rm),
                           (VTI.Pred VCCR:$mask))),
             (VTI.Vec (Inst (VTI.Vec MQPR:$Qn), (i32 rGPR:$Rm),
-                          ARMVCCThen, (VTI.Pred VCCR:$mask),
+                          ARMVCCThen, (VTI.Pred VCCR:$mask), zero_reg,
                           (VTI.Vec MQPR:$inactive)))>;
 }
 
@@ -5609,7 +5609,7 @@ multiclass MVE_VMLA_qr_multi<string iname, MVEVectorVTInfo VTI,
     }
 
     def : Pat<(VTI.Vec (pred_int v1, v2, s, pred)),
-              (VTI.Vec (Inst v1, v2, s, ARMVCCThen, pred))>;
+              (VTI.Vec (Inst v1, v2, s, ARMVCCThen, pred, zero_reg))>;
   }
 }
 
@@ -5645,9 +5645,9 @@ multiclass MVE_VFMA_qr_multi<string iname, MVEVectorVTInfo VTI,
       def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                   (VTI.Vec (fma v1, v2, vs)),
                                   v1)),
-                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred))>;
+                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(VTI.Vec (pred_int v1, v2, vs, pred)),
-                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, pred))>;
+                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, pred, zero_reg))>;
     } else {
       def : Pat<(VTI.Vec (fma v1, vs, v2)),
                 (VTI.Vec (Inst v2, v1, is))>;
@@ -5656,15 +5656,15 @@ multiclass MVE_VFMA_qr_multi<string iname, MVEVectorVTInfo VTI,
       def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                   (VTI.Vec (fma vs, v2, v1)),
                                   v1)),
-                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred))>;
+                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(VTI.Vec (vselect (VTI.Pred VCCR:$pred),
                                   (VTI.Vec (fma v2, vs, v1)),
                                   v1)),
-                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred))>;
+                (VTI.Vec (Inst v1, v2, is, ARMVCCThen, $pred, zero_reg))>;
       def : Pat<(VTI.Vec (pred_int v1, vs, v2, pred)),
-                (VTI.Vec (Inst v2, v1, is, ARMVCCThen, pred))>;
+                (VTI.Vec (Inst v2, v1, is, ARMVCCThen, pred, zero_reg))>;
       def : Pat<(VTI.Vec (pred_int vs, v1, v2, pred)),
-                (VTI.Vec (Inst v2, v1, is, ARMVCCThen, pred))>;
+                (VTI.Vec (Inst v2, v1, is, ARMVCCThen, pred, zero_reg))>;
     }
   }
 }
@@ -5704,7 +5704,7 @@ multiclass MVE_VQDMLAH_qr_multi<string iname, MVEVectorVTInfo VTI,
                                    (i32 rGPR:$s), (VTI.Pred VCCR:$pred))),
               (VTI.Vec (Inst       (VTI.Vec MQPR:$v1), (VTI.Vec MQPR:$v2),
                                    (i32 rGPR:$s), ARMVCCThen,
-                                   (VTI.Pred VCCR:$pred)))>;
+                                   (VTI.Pred VCCR:$pred), zero_reg))>;
   }
 }
 
@@ -5817,7 +5817,7 @@ multiclass MVE_VCTP<MVEVectorVTInfo VTI, Intrinsic intr> {
     def : Pat<(intr rGPR:$Rn),
               (VTI.Pred (Inst rGPR:$Rn))>;
     def : Pat<(and (intr rGPR:$Rn), (VTI.Pred VCCR:$mask)),
-              (VTI.Pred (Inst rGPR:$Rn, ARMVCCThen, VCCR:$mask))>;
+              (VTI.Pred (Inst rGPR:$Rn, ARMVCCThen, VCCR:$mask, zero_reg))>;
   }
 }
 
@@ -6349,9 +6349,9 @@ multiclass MVE_VLDR_rq_w<MVE_memsz memsz, list<MVEVectorVTInfo> VTIs> {
     def : Pat<(VTI.Vec (int_arm_mve_vldr_gather_offset GPR:$base, (VTIs[0].Vec MQPR:$offsets), memsz.TypeBits, memsz.shift, UnsignedFlag)),
               (VTI.Vec (Inst GPR:$base, MQPR:$offsets))>;
     def : Pat<(VTI.Vec (int_arm_mve_vldr_gather_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), memsz.TypeBits, 0, UnsignedFlag, (VTI.Pred VCCR:$pred))),
-              (VTI.Vec (InstU GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred))>;
+              (VTI.Vec (InstU GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg))>;
     def : Pat<(VTI.Vec (int_arm_mve_vldr_gather_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), memsz.TypeBits, memsz.shift, UnsignedFlag, (VTI.Pred VCCR:$pred))),
-              (VTI.Vec (Inst GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred))>;
+              (VTI.Vec (Inst GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg))>;
   }
 }
 multiclass MVE_VLDR_rq_b<list<MVEVectorVTInfo> VTIs> {
@@ -6363,7 +6363,7 @@ multiclass MVE_VLDR_rq_b<list<MVEVectorVTInfo> VTIs> {
     def : Pat<(VTI.Vec (int_arm_mve_vldr_gather_offset GPR:$base, (VTIs[0].Vec MQPR:$offsets), 8, 0, VTI.Unsigned)),
               (VTI.Vec (Inst GPR:$base, MQPR:$offsets))>;
     def : Pat<(VTI.Vec (int_arm_mve_vldr_gather_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), 8, 0, VTI.Unsigned, (VTI.Pred VCCR:$pred))),
-              (VTI.Vec (Inst GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred))>;
+              (VTI.Vec (Inst GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg))>;
   }
 }
 multiclass MVE_VSTR_rq_w<MVE_memsz memsz, list<MVEVectorVTInfo> VTIs> {
@@ -6378,9 +6378,9 @@ multiclass MVE_VSTR_rq_w<MVE_memsz memsz, list<MVEVectorVTInfo> VTIs> {
     def : Pat<(int_arm_mve_vstr_scatter_offset GPR:$base, (VTIs[0].Vec MQPR:$offsets), (VTI.Vec MQPR:$data), memsz.TypeBits, memsz.shift),
               (Inst MQPR:$data, GPR:$base, MQPR:$offsets)>;
     def : Pat<(int_arm_mve_vstr_scatter_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), (VTI.Vec MQPR:$data), memsz.TypeBits, 0, (VTI.Pred VCCR:$pred)),
-              (InstU MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred)>;
+              (InstU MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg)>;
     def : Pat<(int_arm_mve_vstr_scatter_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), (VTI.Vec MQPR:$data), memsz.TypeBits, memsz.shift, (VTI.Pred VCCR:$pred)),
-              (Inst MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred)>;
+              (Inst MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg)>;
   }
 }
 multiclass MVE_VSTR_rq_b<list<MVEVectorVTInfo> VTIs> {
@@ -6392,7 +6392,7 @@ multiclass MVE_VSTR_rq_b<list<MVEVectorVTInfo> VTIs> {
     def : Pat<(int_arm_mve_vstr_scatter_offset GPR:$base, (VTIs[0].Vec MQPR:$offsets), (VTI.Vec MQPR:$data), 8, 0),
               (Inst MQPR:$data, GPR:$base, MQPR:$offsets)>;
     def : Pat<(int_arm_mve_vstr_scatter_offset_predicated GPR:$base, (VTIs[0].Vec MQPR:$offsets), (VTI.Vec MQPR:$data), 8, 0, (VTI.Pred VCCR:$pred)),
-              (Inst MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred)>;
+              (Inst MQPR:$data, GPR:$base, MQPR:$offsets, ARMVCCThen, VCCR:$pred, zero_reg)>;
   }
 }
 
@@ -6473,7 +6473,7 @@ multiclass MVE_VLDR_qi<MVE_memsz memsz, MVEVectorVTInfo AVTI,
     def : Pat<(DVTI.Vec (int_arm_mve_vldr_gather_base_predicated
                  (AVTI.Vec MQPR:$addr), (i32 imm:$offset), (AVTI.Pred VCCR:$pred))),
               (DVTI.Vec (Inst (AVTI.Vec MQPR:$addr), (i32 imm:$offset),
-                        ARMVCCThen, VCCR:$pred))>;
+                        ARMVCCThen, VCCR:$pred, zero_reg))>;
   }
 }
 multiclass MVE_VSTR_qi<MVE_memsz memsz, MVEVectorVTInfo AVTI,
@@ -6491,7 +6491,7 @@ multiclass MVE_VSTR_qi<MVE_memsz memsz, MVEVectorVTInfo AVTI,
     def : Pat<(int_arm_mve_vstr_scatter_base_predicated
                 (AVTI.Vec MQPR:$addr), (i32 imm:$offset), (DVTI.Vec MQPR:$data), (AVTI.Pred VCCR:$pred)),
               (Inst (DVTI.Vec MQPR:$data), (AVTI.Vec MQPR:$addr),
-                    (i32 imm:$offset), ARMVCCThen, VCCR:$pred)>;
+                    (i32 imm:$offset), ARMVCCThen, VCCR:$pred, zero_reg)>;
     def : Pat<(AVTI.Vec (int_arm_mve_vstr_scatter_base_wb
                 (AVTI.Vec MQPR:$addr), (i32 imm:$offset), (DVTI.Vec MQPR:$data))),
               (AVTI.Vec (InstPre (DVTI.Vec MQPR:$data), (AVTI.Vec MQPR:$addr),
@@ -6499,7 +6499,7 @@ multiclass MVE_VSTR_qi<MVE_memsz memsz, MVEVectorVTInfo AVTI,
     def : Pat<(AVTI.Vec (int_arm_mve_vstr_scatter_base_wb_predicated
                 (AVTI.Vec MQPR:$addr), (i32 imm:$offset), (DVTI.Vec MQPR:$data), (AVTI.Pred VCCR:$pred))),
               (AVTI.Vec (InstPre (DVTI.Vec MQPR:$data), (AVTI.Vec MQPR:$addr),
-                                 (i32 imm:$offset), ARMVCCThen, VCCR:$pred))>;
+                                 (i32 imm:$offset), ARMVCCThen, VCCR:$pred, zero_reg))>;
   }
 }
 
@@ -6754,71 +6754,71 @@ def : MVEInstAlias<"vpsel${vp}." # suffix # "\t$Qd, $Qn, $Qm",
 
 let Predicates = [HasMVEInt] in {
   def : Pat<(v16i8 (vselect (v16i1 VCCR:$pred), (v16i8 MQPR:$v1), (v16i8 MQPR:$v2))),
-            (v16i8 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred))>;
+            (v16i8 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v8i16 (vselect (v8i1 VCCR:$pred), (v8i16 MQPR:$v1), (v8i16 MQPR:$v2))),
-            (v8i16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred))>;
+            (v8i16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v4i32 (vselect (v4i1 VCCR:$pred), (v4i32 MQPR:$v1), (v4i32 MQPR:$v2))),
-            (v4i32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred))>;
+            (v4i32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred, zero_reg))>;
 
   def : Pat<(v8f16 (vselect (v8i1 VCCR:$pred), (v8f16 MQPR:$v1), (v8f16 MQPR:$v2))),
-            (v8f16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred))>;
+            (v8f16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v4f32 (vselect (v4i1 VCCR:$pred), (v4f32 MQPR:$v1), (v4f32 MQPR:$v2))),
-            (v4f32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred))>;
+            (v4f32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone, VCCR:$pred, zero_reg))>;
 
   def : Pat<(v16i8 (vselect (v16i8 MQPR:$pred), (v16i8 MQPR:$v1), (v16i8 MQPR:$v2))),
             (v16i8 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone,
-                              (MVE_VCMPi8 (v16i8 MQPR:$pred), (MVE_VMOVimmi8 0), ARMCCne)))>;
+                              (MVE_VCMPi8 (v16i8 MQPR:$pred), (MVE_VMOVimmi8 0), ARMCCne), zero_reg))>;
   def : Pat<(v8i16 (vselect (v8i16 MQPR:$pred), (v8i16 MQPR:$v1), (v8i16 MQPR:$v2))),
             (v8i16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone,
-                              (MVE_VCMPi16 (v8i16 MQPR:$pred), (MVE_VMOVimmi16 0), ARMCCne)))>;
+                              (MVE_VCMPi16 (v8i16 MQPR:$pred), (MVE_VMOVimmi16 0), ARMCCne), zero_reg))>;
   def : Pat<(v4i32 (vselect (v4i32 MQPR:$pred), (v4i32 MQPR:$v1), (v4i32 MQPR:$v2))),
             (v4i32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone,
-                              (MVE_VCMPi32 (v4i32 MQPR:$pred), (MVE_VMOVimmi32 0), ARMCCne)))>;
+                              (MVE_VCMPi32 (v4i32 MQPR:$pred), (MVE_VMOVimmi32 0), ARMCCne), zero_reg))>;
 
   def : Pat<(v8f16 (vselect (v8i16 MQPR:$pred), (v8f16 MQPR:$v1), (v8f16 MQPR:$v2))),
             (v8f16 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone,
-                              (MVE_VCMPi16 (v8i16 MQPR:$pred), (MVE_VMOVimmi16 0), ARMCCne)))>;
+                              (MVE_VCMPi16 (v8i16 MQPR:$pred), (MVE_VMOVimmi16 0), ARMCCne), zero_reg))>;
   def : Pat<(v4f32 (vselect (v4i32 MQPR:$pred), (v4f32 MQPR:$v1), (v4f32 MQPR:$v2))),
             (v4f32 (MVE_VPSEL MQPR:$v1, MQPR:$v2, ARMVCCNone,
-                              (MVE_VCMPi32 (v4i32 MQPR:$pred), (MVE_VMOVimmi32 0), ARMCCne)))>;
+                              (MVE_VCMPi32 (v4i32 MQPR:$pred), (MVE_VMOVimmi32 0), ARMCCne), zero_reg))>;
 
   // Pred <-> Int
   def : Pat<(v16i8 (zext  (v16i1 VCCR:$pred))),
-            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 1), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred))>;
+            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 1), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v8i16 (zext  (v8i1  VCCR:$pred))),
-            (v8i16 (MVE_VPSEL (MVE_VMOVimmi16 1), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred))>;
+            (v8i16 (MVE_VPSEL (MVE_VMOVimmi16 1), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v4i32 (zext  (v4i1  VCCR:$pred))),
-            (v4i32 (MVE_VPSEL (MVE_VMOVimmi32 1), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred))>;
+            (v4i32 (MVE_VPSEL (MVE_VMOVimmi32 1), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
 
   def : Pat<(v16i8 (sext  (v16i1 VCCR:$pred))),
-            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred))>;
+            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v8i16 (sext  (v8i1  VCCR:$pred))),
-            (v8i16 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred))>;
+            (v8i16 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v4i32 (sext  (v4i1  VCCR:$pred))),
-            (v4i32 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred))>;
+            (v4i32 (MVE_VPSEL (MVE_VMOVimmi8 255), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
 
   def : Pat<(v16i8 (anyext  (v16i1 VCCR:$pred))),
-            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 1), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred))>;
+            (v16i8 (MVE_VPSEL (MVE_VMOVimmi8 1), (MVE_VMOVimmi8 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v8i16 (anyext  (v8i1  VCCR:$pred))),
-            (v8i16 (MVE_VPSEL (MVE_VMOVimmi16 1), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred))>;
+            (v8i16 (MVE_VPSEL (MVE_VMOVimmi16 1), (MVE_VMOVimmi16 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
   def : Pat<(v4i32 (anyext  (v4i1  VCCR:$pred))),
-            (v4i32 (MVE_VPSEL (MVE_VMOVimmi32 1), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred))>;
+            (v4i32 (MVE_VPSEL (MVE_VMOVimmi32 1), (MVE_VMOVimmi32 0), ARMVCCNone, VCCR:$pred, zero_reg))>;
 }
 
 let Predicates = [HasMVEFloat] in {
   // Pred <-> Float
   // 112 is 1.0 in float
   def : Pat<(v4f32 (uint_to_fp (v4i1 VCCR:$pred))),
-            (v4f32 (MVE_VPSEL (v4f32 (MVE_VMOVimmf32 112)), (v4f32 (MVE_VMOVimmi32 0)), ARMVCCNone, VCCR:$pred))>;
+            (v4f32 (MVE_VPSEL (v4f32 (MVE_VMOVimmf32 112)), (v4f32 (MVE_VMOVimmi32 0)), ARMVCCNone, VCCR:$pred, zero_reg))>;
  // 2620 is 1.0 in half
   def : Pat<(v8f16 (uint_to_fp (v8i1 VCCR:$pred))),
-            (v8f16 (MVE_VPSEL (v8f16 (MVE_VMOVimmi16 2620)), (v8f16 (MVE_VMOVimmi16 0)), ARMVCCNone, VCCR:$pred))>;
+            (v8f16 (MVE_VPSEL (v8f16 (MVE_VMOVimmi16 2620)), (v8f16 (MVE_VMOVimmi16 0)), ARMVCCNone, VCCR:$pred, zero_reg))>;
   // 240 is -1.0 in float
   def : Pat<(v4f32 (sint_to_fp (v4i1 VCCR:$pred))),
-            (v4f32 (MVE_VPSEL (v4f32 (MVE_VMOVimmf32 240)), (v4f32 (MVE_VMOVimmi32 0)), ARMVCCNone, VCCR:$pred))>;
+            (v4f32 (MVE_VPSEL (v4f32 (MVE_VMOVimmf32 240)), (v4f32 (MVE_VMOVimmi32 0)), ARMVCCNone, VCCR:$pred, zero_reg))>;
   // 2748 is -1.0 in half
   def : Pat<(v8f16 (sint_to_fp (v8i1 VCCR:$pred))),
-            (v8f16 (MVE_VPSEL (v8f16 (MVE_VMOVimmi16 2748)), (v8f16 (MVE_VMOVimmi16 0)), ARMVCCNone, VCCR:$pred))>;
+            (v8f16 (MVE_VPSEL (v8f16 (MVE_VMOVimmi16 2748)), (v8f16 (MVE_VMOVimmi16 0)), ARMVCCNone, VCCR:$pred, zero_reg))>;
 
   def : Pat<(v4i1 (fp_to_uint (v4f32 MQPR:$v1))),
             (v4i1 (MVE_VCMPf32r (v4f32 MQPR:$v1), ZR, ARMCCne))>;
@@ -7175,7 +7175,7 @@ class MVE_vector_store_typed<ValueType Ty, Instruction RegImmInst,
 class MVE_vector_maskedstore_typed<ValueType Ty, Instruction RegImmInst,
                                    PatFrag StoreKind, int shift>
   : Pat<(StoreKind (Ty MQPR:$val), t2addrmode_imm7<shift>:$addr, VCCR:$pred),
-        (RegImmInst (Ty MQPR:$val), t2addrmode_imm7<shift>:$addr, ARMVCCThen, VCCR:$pred)>;
+        (RegImmInst (Ty MQPR:$val), t2addrmode_imm7<shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg)>;
 
 multiclass MVE_vector_store<Instruction RegImmInst, PatFrag StoreKind,
                             int shift> {
@@ -7196,7 +7196,7 @@ class MVE_vector_load_typed<ValueType Ty, Instruction RegImmInst,
 class MVE_vector_maskedload_typed<ValueType Ty, Instruction RegImmInst,
                                   PatFrag LoadKind, int shift>
   : Pat<(Ty (LoadKind t2addrmode_imm7<shift>:$addr, VCCR:$pred, (Ty (ARMvmovImm (i32 0))))),
-        (Ty (RegImmInst t2addrmode_imm7<shift>:$addr, ARMVCCThen, VCCR:$pred))>;
+        (Ty (RegImmInst t2addrmode_imm7<shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg))>;
 
 multiclass MVE_vector_load<Instruction RegImmInst, PatFrag LoadKind,
                            int shift> {
@@ -7217,7 +7217,7 @@ class MVE_vector_offset_store_typed<ValueType Ty, Instruction Opcode,
 class MVE_vector_offset_maskedstore_typed<ValueType Ty, Instruction Opcode,
                                           PatFrag StoreKind, int shift>
   : Pat<(StoreKind (Ty MQPR:$Rt), tGPR:$Rn, t2am_imm7_offset<shift>:$addr, VCCR:$pred),
-        (Opcode MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<shift>:$addr, ARMVCCThen, VCCR:$pred)>;
+        (Opcode MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg)>;
 
 multiclass MVE_vector_offset_store<Instruction RegImmInst, PatFrag StoreKind,
                                    int shift> {
@@ -7347,11 +7347,11 @@ multiclass MVEExtLoadStore<Instruction LoadSInst, Instruction LoadUInst, string
 
   // Masked trunc stores
   def : Pat<(!cast<PatFrag>("aligned_truncmaskedst"#Amble) (VT MQPR:$val), taddrmode_imm7<Shift>:$addr, VCCR:$pred),
-            (!cast<Instruction>(StoreInst) MQPR:$val, taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred)>;
+            (!cast<Instruction>(StoreInst) MQPR:$val, taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg)>;
   def : Pat<(!cast<PatFrag>("aligned_post_truncmaskedst"#Amble) (VT MQPR:$Rt), tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, VCCR:$pred),
-            (!cast<Instruction>(StoreInst#"_post") MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, ARMVCCThen, VCCR:$pred)>;
+            (!cast<Instruction>(StoreInst#"_post") MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg)>;
   def : Pat<(!cast<PatFrag>("aligned_pre_truncmaskedst"#Amble) (VT MQPR:$Rt), tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, VCCR:$pred),
-            (!cast<Instruction>(StoreInst#"_pre") MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, ARMVCCThen, VCCR:$pred)>;
+            (!cast<Instruction>(StoreInst#"_pre") MQPR:$Rt, tGPR:$Rn, t2am_imm7_offset<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg)>;
 
   // Ext loads
   def : Pat<(VT (!cast<PatFrag>("aligned_extload"#Amble) taddrmode_imm7<Shift>:$addr)),
@@ -7363,11 +7363,11 @@ multiclass MVEExtLoadStore<Instruction LoadSInst, Instruction LoadUInst, string
 
   // Masked ext loads
   def : Pat<(VT (!cast<PatFrag>("aligned_extmaskedload"#Amble) taddrmode_imm7<Shift>:$addr, VCCR:$pred, (VT (ARMvmovImm (i32 0))))),
-            (VT (LoadUInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred))>;
+            (VT (LoadUInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg))>;
   def : Pat<(VT (!cast<PatFrag>("aligned_sextmaskedload"#Amble) taddrmode_imm7<Shift>:$addr, VCCR:$pred, (VT (ARMvmovImm (i32 0))))),
-            (VT (LoadSInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred))>;
+            (VT (LoadSInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg))>;
   def : Pat<(VT (!cast<PatFrag>("aligned_zextmaskedload"#Amble) taddrmode_imm7<Shift>:$addr, VCCR:$pred, (VT (ARMvmovImm (i32 0))))),
-            (VT (LoadUInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred))>;
+            (VT (LoadUInst taddrmode_imm7<Shift>:$addr, ARMVCCThen, VCCR:$pred, zero_reg))>;
 }
 
 let Predicates = [HasMVEInt] in {

diff  --git a/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp b/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
index 6b114f8d53e63..08e8ac6903801 100644
--- a/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ b/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -2811,6 +2811,7 @@ static MachineInstr *createPostIncLoadStore(MachineInstr *MI, int Offset,
         .addImm(Offset)
         .add(MI->getOperand(3))
         .add(MI->getOperand(4))
+        .add(MI->getOperand(5))
         .cloneMemRefs(*MI);
   case ARMII::AddrModeT2_i8:
     if (MI->mayLoad()) {

diff  --git a/llvm/lib/Target/ARM/ARMLowOverheadLoops.cpp b/llvm/lib/Target/ARM/ARMLowOverheadLoops.cpp
index 595da77e3327c..579e7d4a9cc5d 100644
--- a/llvm/lib/Target/ARM/ARMLowOverheadLoops.cpp
+++ b/llvm/lib/Target/ARM/ARMLowOverheadLoops.cpp
@@ -880,6 +880,10 @@ static bool producesFalseLanesZero(MachineInstr &MI,
       continue;
     if (!isRegInClass(MO, QPRs) && AllowScalars)
       continue;
+    // Skip the lr predicate reg
+    int PIdx = llvm::findFirstVPTPredOperandIdx(MI);
+    if (PIdx != -1 && (int)MI.getOperandNo(&MO) == PIdx + 2)
+      continue;
 
     // Check that this instruction will produce zeros in its false lanes:
     // - If it only consumes false lanes zero or constant 0 (vmov #0)

diff  --git a/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp b/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
index 339919db2bdea..f0d3c46993127 100644
--- a/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
+++ b/llvm/lib/Target/ARM/AsmParser/ARMAsmParser.cpp
@@ -2478,14 +2478,15 @@ class ARMOperand : public MCParsedAsmOperand {
   }
 
   void addVPTPredNOperands(MCInst &Inst, unsigned N) const {
-    assert(N == 2 && "Invalid number of operands!");
+    assert(N == 3 && "Invalid number of operands!");
     Inst.addOperand(MCOperand::createImm(unsigned(getVPTPred())));
     unsigned RegNum = getVPTPred() == ARMVCC::None ? 0: ARM::P0;
     Inst.addOperand(MCOperand::createReg(RegNum));
+    Inst.addOperand(MCOperand::createReg(0));
   }
 
   void addVPTPredROperands(MCInst &Inst, unsigned N) const {
-    assert(N == 3 && "Invalid number of operands!");
+    assert(N == 4 && "Invalid number of operands!");
     addVPTPredNOperands(Inst, N-1);
     unsigned RegNum;
     if (getVPTPred() == ARMVCC::None) {
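Worth noting about the parser change above: addVPTPredNOperands
unconditionally appends a zero register for the new third slot, so the
textual assembly syntax is unchanged by this patch; within these changes the
operand only receives a real register at the MIR level, in the
ConvertTailPredLoop hunk further down.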

diff  --git a/llvm/lib/Target/ARM/Disassembler/ARMDisassembler.cpp b/llvm/lib/Target/ARM/Disassembler/ARMDisassembler.cpp
index f9a786840db00..1d6cb8157b174 100644
--- a/llvm/lib/Target/ARM/Disassembler/ARMDisassembler.cpp
+++ b/llvm/lib/Target/ARM/Disassembler/ARMDisassembler.cpp
@@ -854,12 +854,15 @@ ARMDisassembler::AddThumbPredicate(MCInst &MI) const {
     VCCI = MI.insert(VCCI, MCOperand::createImm(VCC));
     ++VCCI;
     if (VCC == ARMVCC::None)
-      MI.insert(VCCI, MCOperand::createReg(0));
+      VCCI = MI.insert(VCCI, MCOperand::createReg(0));
     else
-      MI.insert(VCCI, MCOperand::createReg(ARM::P0));
+      VCCI = MI.insert(VCCI, MCOperand::createReg(ARM::P0));
+    ++VCCI;
+    VCCI = MI.insert(VCCI, MCOperand::createReg(0));
+    ++VCCI;
     if (OpInfo[VCCPos].OperandType == ARM::OPERAND_VPRED_R) {
       int TiedOp = ARMInsts[MI.getOpcode()].getOperandConstraint(
-        VCCPos + 2, MCOI::TIED_TO);
+        VCCPos + 3, MCOI::TIED_TO);
       assert(TiedOp >= 0 &&
              "Inactive register in vpred_r is not tied to an output!");
       // Copy the operand to ensure it's not invalidated when MI grows.
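The TIED_TO query moves from VCCPos + 2 to VCCPos + 3 because the extra
register is inserted ahead of the tied operand: for a vpred_r instruction the
decoded operand tail is now the predicate-kind immediate, the VCCR register,
the new (zero) tail-predication register, and then the tied inactive
register.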

diff  --git a/llvm/lib/Target/ARM/MVETPAndVPTOptimisationsPass.cpp b/llvm/lib/Target/ARM/MVETPAndVPTOptimisationsPass.cpp
index 6fa5402096a6a..dc58b54274257 100644
--- a/llvm/lib/Target/ARM/MVETPAndVPTOptimisationsPass.cpp
+++ b/llvm/lib/Target/ARM/MVETPAndVPTOptimisationsPass.cpp
@@ -40,6 +40,11 @@ MergeEndDec("arm-enable-merge-loopenddec", cl::Hidden,
     cl::desc("Enable merging Loop End and Dec instructions."),
     cl::init(true));
 
+static cl::opt<bool>
+SetLRPredicate("arm-set-lr-predicate", cl::Hidden,
+    cl::desc("Enable setting lr as a predicate in tail predication regions."),
+    cl::init(true));
+
 namespace {
 class MVETPAndVPTOptimisations : public MachineFunctionPass {
 public:
@@ -434,10 +439,14 @@ bool MVETPAndVPTOptimisations::ConvertTailPredLoop(MachineLoop *ML,
     return false;
 
   SmallVector<MachineInstr *, 4> VCTPs;
-  for (MachineBasicBlock *BB : ML->blocks())
+  SmallVector<MachineInstr *, 4> MVEInstrs;
+  for (MachineBasicBlock *BB : ML->blocks()) {
     for (MachineInstr &MI : *BB)
       if (isVCTP(&MI))
         VCTPs.push_back(&MI);
+      else if (findFirstVPTPredOperandIdx(MI) != -1)
+        MVEInstrs.push_back(&MI);
+  }
 
   if (VCTPs.empty()) {
     LLVM_DEBUG(dbgs() << "  no VCTPs\n");
@@ -510,6 +519,16 @@ bool MVETPAndVPTOptimisations::ConvertTailPredLoop(MachineLoop *ML,
   MRI->constrainRegClass(CountReg, &ARM::rGPRRegClass);
   LoopStart->eraseFromParent();
 
+  if (SetLRPredicate) {
+    // Each instruction in the loop needs to use the LR value from the Phi
+    // as its tail predicate.
+    Register LR = LoopPhi->getOperand(0).getReg();
+    for (MachineInstr *MI : MVEInstrs) {
+      int Idx = findFirstVPTPredOperandIdx(*MI);
+      MI->getOperand(Idx + 2).setReg(LR);
+    }
+  }
+
   return true;
 }
 
@@ -991,6 +1010,7 @@ bool MVETPAndVPTOptimisations::ConvertVPSEL(MachineBasicBlock &MBB) {
             .add(MI.getOperand(1))
             .addImm(ARMVCC::Then)
             .add(MI.getOperand(4))
+            .add(MI.getOperand(5))
             .add(MI.getOperand(2));
     // Silence unused variable warning in release builds.
     (void)MIBuilder;
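A small usage note on the SetLRPredicate switch added above: it is a hidden
cl::opt defaulting to true, so the new behaviour can be toggled off when
bisecting, e.g. something along the lines of

  llc -mtriple=thumbv8.1m.main -mattr=+mve.fp -arm-set-lr-predicate=false input.ll

assuming the usual llc plumbing for hidden backend options.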

diff  --git a/llvm/test/CodeGen/ARM/machine-outliner-unoutlinable.mir b/llvm/test/CodeGen/ARM/machine-outliner-unoutlinable.mir
index 43c06333017f4..da032019c5683 100644
--- a/llvm/test/CodeGen/ARM/machine-outliner-unoutlinable.mir
+++ b/llvm/test/CodeGen/ARM/machine-outliner-unoutlinable.mir
@@ -149,17 +149,17 @@ body:             |
   ; CHECK-NOT: tBL 14 /* CC::al */, $noreg, @OUTLINED_FUNCTION
   bb.0:
     liveins: $r3, $r4, $q0, $q3, $q4, $q5
-    $q5 = MVE_VDUP32 $r3, 0, $noreg, $q5
-    $q4 = MVE_VDUP32 $r4, 0, $noreg, $q4
-    $q0 = MVE_VADDf32 $q4, $q5, 0, $noreg, $q0
+    $q5 = MVE_VDUP32 $r3, 0, $noreg, $noreg, $q5
+    $q4 = MVE_VDUP32 $r4, 0, $noreg, $noreg, $q4
+    $q0 = MVE_VADDf32 $q4, $q5, 0, $noreg, $noreg, $q0
     $lr = t2DoLoopStart $r4
     $r0 = MVE_VMOV_from_lane_32 renamable $q0, 1, 14, $noreg
     tBL 14, $noreg, @z
   bb.1:
     liveins: $r3, $r4, $q0, $q3, $q4, $q5
-    $q5 = MVE_VDUP32 $r3, 0, $noreg, $q5
-    $q4 = MVE_VDUP32 $r4, 0, $noreg, $q4
-    $q0 = MVE_VADDf32 $q4, $q5, 0, $noreg, $q0
+    $q5 = MVE_VDUP32 $r3, 0, $noreg, $noreg, $q5
+    $q4 = MVE_VDUP32 $r4, 0, $noreg, $noreg, $q4
+    $q0 = MVE_VADDf32 $q4, $q5, 0, $noreg, $noreg, $q0
     $lr = t2DoLoopStart $r4
     $r0 = MVE_VMOV_from_lane_32 renamable $q0, 1, 14, $noreg
     tBL 14, $noreg, @z

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/add_reduce.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/add_reduce.mir
index 4b39743fc79da..ecaf68d90a954 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/add_reduce.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/add_reduce.mir
@@ -167,28 +167,28 @@ body:             |
   ; CHECK:   $r6, $r5 = t2LDRDi8 $sp, 40, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.4, align 8), (load (s32) from %fixed-stack.5)
   ; CHECK:   $r4 = tMOVr killed $r7, 14 /* CC::al */, $noreg
   ; CHECK:   $r7, $r8 = t2LDRDi8 $sp, 24, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8), (load (s32) from %fixed-stack.1)
-  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r5, dead $cpsr = tSUBi3 killed renamable $r7, 4, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.for.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r8, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s128) from %ir.input_2_cast, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s128) from %ir.input_1_cast, align 4)
-  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
+  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r8, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r8, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, undef renamable $q2
-  ; CHECK:   renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, killed renamable $vpr, undef renamable $q2
-  ; CHECK:   renamable $r6 = MVE_VADDVu32no_acc killed renamable $q2, 0, $noreg
+  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, killed renamable $vpr, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r6 = MVE_VADDVu32no_acc killed renamable $q2, 0, $noreg, $noreg
   ; CHECK:   early-clobber renamable $r5 = t2STR_PRE killed renamable $r6, killed renamable $r5, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.scevgep2)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
@@ -219,30 +219,30 @@ body:             |
     $r6, $r5 = t2LDRDi8 $sp, 40, 14, $noreg :: (load (s32) from %fixed-stack.2, align 8), (load (s32) from %fixed-stack.1)
     $r4 = tMOVr killed $r7, 14, $noreg
     $r7, $r8 = t2LDRDi8 $sp, 24, 14, $noreg :: (load (s32) from %fixed-stack.6, align 8), (load (s32) from %fixed-stack.5)
-    renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q1
     renamable $r5, dead $cpsr = tSUBi3 killed renamable $r7, 4, 14, $noreg
 
   bb.2.for.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r8, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s128) from %ir.input_2_cast, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s128) from %ir.input_1_cast, align 4)
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
+    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
     $lr = tMOVr $r4, 14, $noreg
-    renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14, $noreg
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r8, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r8, 0, $noreg, $noreg, undef renamable $q2
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, undef renamable $q2
-    renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, killed renamable $vpr, undef renamable $q2
-    renamable $r6 = MVE_VADDVu32no_acc killed renamable $q2, 0, $noreg
+    renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, killed renamable $vpr, $noreg, undef renamable $q2
+    renamable $r6 = MVE_VADDVu32no_acc killed renamable $q2, 0, $noreg, $noreg
     early-clobber renamable $r5 = t2STR_PRE killed renamable $r6, killed renamable $r5, 4, 14, $noreg :: (store (s32) into %ir.scevgep2)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/begin-vpt-without-inst.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/begin-vpt-without-inst.mir
index 3f8f9eb6d91eb..5be0eaf778cad 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/begin-vpt-without-inst.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/begin-vpt-without-inst.mir
@@ -64,18 +64,18 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0
   ; CHECK:   renamable $r1 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 3, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 3, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
   ; CHECK:   $r1 = t2MOVi16 target-flags(arm-lo16) @arr, 14 /* CC::al */, $noreg
   ; CHECK:   $r1 = t2MOVTi16 killed $r1, target-flags(arm-hi16) @arr, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCMPu32 killed renamable $q0, killed renamable $q1, 8, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 2, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $vpr = MVE_VCMPu32 killed renamable $q0, killed renamable $q1, 8, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 2, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK: bb.2.vector.ph:
   ; CHECK:   successors: %bb.3(0x04000000), %bb.2(0x7c000000)
   ; CHECK:   liveins: $vpr, $q0, $r0, $r1
   ; CHECK:   renamable $r0, $cpsr = tADDi8 killed renamable $r0, 1, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 renamable $q0, renamable $r1, 0, 1, renamable $vpr :: (store (s128) into `<4 x i32>* bitcast ([0 x i32]* @arr to <4 x i32>*)`, align 4)
+  ; CHECK:   MVE_VSTRWU32 renamable $q0, renamable $r1, 0, 1, renamable $vpr, $noreg :: (store (s128) into `<4 x i32>* bitcast ([0 x i32]* @arr to <4 x i32>*)`, align 4)
   ; CHECK:   tBcc %bb.2, 3 /* CC::lo */, killed $cpsr
   ; CHECK: bb.3.for.end5:
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg
@@ -92,12 +92,12 @@ body:             |
     liveins: $r0
 
     renamable $r1 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 3, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
+    renamable $q0 = MVE_VMOVimmi32 3, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
     $r1 = t2MOVi16 target-flags(arm-lo16) @arr, 14 /* CC::al */, $noreg
     $r1 = t2MOVTi16 killed $r1, target-flags(arm-hi16) @arr, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCMPu32 killed renamable $q0, killed renamable $q1, 8, 0, $noreg
-    renamable $q0 = MVE_VMOVimmi32 2, 0, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VCMPu32 killed renamable $q0, killed renamable $q1, 8, 0, $noreg, $noreg
+    renamable $q0 = MVE_VMOVimmi32 2, 0, $noreg, $noreg, undef renamable $q0
 
   bb.2.vector.ph:
     successors: %bb.3(0x04000000), %bb.2(0x7c000000)
@@ -105,7 +105,7 @@ body:             |
 
     renamable $r0, $cpsr = tADDi8 killed renamable $r0, 1, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 renamable $q0, renamable $r1, 0, 1, renamable $vpr :: (store (s128) into `<4 x i32>* bitcast ([0 x i32]* @arr to <4 x i32>*)`, align 4)
+    MVE_VSTRWU32 renamable $q0, renamable $r1, 0, 1, renamable $vpr, $noreg :: (store (s128) into `<4 x i32>* bitcast ([0 x i32]* @arr to <4 x i32>*)`, align 4)
     tBcc %bb.2, 3 /* CC::lo */, killed $cpsr
 
   bb.3.for.end5:

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/cmplx_cong.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/cmplx_cong.mir
index c56a8a239ddc5..ac47bb0dc4740 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/cmplx_cong.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/cmplx_cong.mir
@@ -46,14 +46,14 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tLSLri killed renamable $r2, 1, 14 /* CC::al */, $noreg
   ; CHECK:   $r4 = t2MOVi16 target-flags(arm-lo16) @arm_cmplx_conj_f32_mve.cmplx_conj_sign, 14 /* CC::al */, $noreg
   ; CHECK:   $r4 = t2MOVTi16 killed $r4, target-flags(arm-hi16) @arm_cmplx_conj_f32_mve.cmplx_conj_sign, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg
+  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
@@ -77,7 +77,7 @@ body:             |
     renamable $r2, dead $cpsr = tADDi8 killed renamable $r2, 3, 14 /* CC::al */, $noreg
     renamable $lr = t2MOVi 1, 14 /* CC::al */, $noreg, $noreg
     $r4 = t2MOVTi16 killed $r4, target-flags(arm-hi16) @arm_cmplx_conj_f32_mve.cmplx_conj_sign, 14 /* CC::al */, $noreg
-    renamable $q0 = nnan ninf nsz MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg
+    renamable $q0 = nnan ninf nsz MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $lr, killed renamable $r2, 19, 14 /* CC::al */, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
 
@@ -85,11 +85,11 @@ body:             |
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
-    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q1
+    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr, $noreg
     renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/count_dominates_start.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/count_dominates_start.mir
index 939d3978b8601..7ba9a691663f9 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/count_dominates_start.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/count_dominates_start.mir
@@ -134,12 +134,12 @@ body:             |
   ; CHECK:   [[PHI2:%[0-9]+]]:tgpreven = PHI [[COPY5]], %bb.2, %8, %bb.3
   ; CHECK:   [[PHI3:%[0-9]+]]:gprlr = PHI [[t2DoLoopStartTP]], %bb.2, %33, %bb.3
   ; CHECK:   [[PHI4:%[0-9]+]]:rgpr = PHI [[COPY6]], %bb.2, %7, %bb.3
-  ; CHECK:   [[MVE_VCTP16_:%[0-9]+]]:vccr = MVE_VCTP16 [[PHI4]], 0, $noreg
+  ; CHECK:   [[MVE_VCTP16_:%[0-9]+]]:vccr = MVE_VCTP16 [[PHI4]], 0, $noreg, $noreg
   ; CHECK:   [[t2SUBri1:%[0-9]+]]:rgpr = t2SUBri [[PHI4]], 8, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   [[COPY7:%[0-9]+]]:gpr = COPY [[t2SUBri1]]
-  ; CHECK:   [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[PHI]], 16, 1, [[MVE_VCTP16_]] :: (load (s128) from %ir.lsr.iv35, align 2)
-  ; CHECK:   [[MVE_VLDRHU16_post2:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post3:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[PHI1]], 16, 1, [[MVE_VCTP16_]] :: (load (s128) from %ir.lsr.iv12, align 2)
-  ; CHECK:   [[MVE_VMLADAVas16_:%[0-9]+]]:tgpreven = MVE_VMLADAVas16 [[PHI2]], killed [[MVE_VLDRHU16_post3]], killed [[MVE_VLDRHU16_post1]], 1, [[MVE_VCTP16_]]
+  ; CHECK:   [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[PHI]], 16, 1, [[MVE_VCTP16_]], [[PHI3]] :: (load (s128) from %ir.lsr.iv35, align 2)
+  ; CHECK:   [[MVE_VLDRHU16_post2:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post3:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[PHI1]], 16, 1, [[MVE_VCTP16_]], [[PHI3]] :: (load (s128) from %ir.lsr.iv12, align 2)
+  ; CHECK:   [[MVE_VMLADAVas16_:%[0-9]+]]:tgpreven = MVE_VMLADAVas16 [[PHI2]], killed [[MVE_VLDRHU16_post3]], killed [[MVE_VLDRHU16_post1]], 1, [[MVE_VCTP16_]], [[PHI3]]
   ; CHECK:   [[COPY8:%[0-9]+]]:gpr = COPY [[MVE_VMLADAVas16_]]
   ; CHECK:   [[COPY9:%[0-9]+]]:gpr = COPY [[MVE_VLDRHU16_post2]]
   ; CHECK:   [[COPY10:%[0-9]+]]:gpr = COPY [[MVE_VLDRHU16_post]]
@@ -189,12 +189,12 @@ body:             |
     %4:tgpreven = PHI %23, %bb.1, %8, %bb.2
     %5:gprlr = PHI %1, %bb.1, %11, %bb.2
     %6:rgpr = PHI %35, %bb.1, %7, %bb.2
-    %26:vccr = MVE_VCTP16 %6, 0, $noreg
+    %26:vccr = MVE_VCTP16 %6, 0, $noreg, $noreg
     %27:rgpr = t2SUBri %6, 8, 14 /* CC::al */, $noreg, $noreg
     %7:gpr = COPY %27
-    %28:rgpr, %29:mqpr = MVE_VLDRHU16_post %2, 16, 1, %26 :: (load (s128) from %ir.lsr.iv35, align 2)
-    %30:rgpr, %31:mqpr = MVE_VLDRHU16_post %3, 16, 1, %26 :: (load (s128) from %ir.lsr.iv12, align 2)
-    %32:tgpreven = MVE_VMLADAVas16 %4, killed %31, killed %29, 1, %26
+    %28:rgpr, %29:mqpr = MVE_VLDRHU16_post %2, 16, 1, %26, $noreg :: (load (s128) from %ir.lsr.iv35, align 2)
+    %30:rgpr, %31:mqpr = MVE_VLDRHU16_post %3, 16, 1, %26, $noreg :: (load (s128) from %ir.lsr.iv12, align 2)
+    %32:tgpreven = MVE_VMLADAVas16 %4, killed %31, killed %29, 1, %26, $noreg
     %8:gpr = COPY %32
     %9:gpr = COPY %30
     %10:gpr = COPY %28

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/ctlz-non-zeros.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/ctlz-non-zeros.mir
index afc2916ddb916..f3a731317d85a 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/ctlz-non-zeros.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/ctlz-non-zeros.mir
@@ -176,17 +176,17 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r4
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 2)
-  ; CHECK:   renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 2)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 2)
+  ; CHECK:   renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 2)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VCLZs8 killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VCLZs8 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $r0 = tMOVr $r1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg
+  ; CHECK:   renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 2)
+  ; CHECK:   renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 2)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -215,18 +215,18 @@ body:             |
     liveins: $r0, $r1, $r2, $r3, $r4
 
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 2)
-    renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 2)
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 2)
+    renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 2)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
     renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VCLZs8 killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VCLZs8 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg
+    renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 2)
+    renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 2)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -283,16 +283,16 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r4, $r12
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q1 = MVE_VCLZs16 killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
+  ; CHECK:   renamable $q1 = MVE_VCLZs16 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r4
@@ -322,17 +322,17 @@ body:             |
     liveins: $r0, $r1, $r2, $r3, $r12
 
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
-    renamable $q1 = MVE_VCLZs16 killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VCLZs16 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
+    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -389,16 +389,16 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r4, $r12
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
-  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
+  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q1 = MVE_VCLZs32 killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VQSHRUNs32th killed renamable $q0, killed renamable $q1, 3, 0, $noreg
+  ; CHECK:   renamable $q1 = MVE_VCLZs32 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VQSHRUNs32th killed renamable $q0, killed renamable $q1, 3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r4
@@ -428,17 +428,17 @@ body:             |
     liveins: $r0, $r1, $r2, $r3, $r12
 
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
-    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
+    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
-    renamable $q1 = MVE_VCLZs32 killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VCLZs32 killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VQSHRUNs32th killed renamable $q0, killed renamable $q1, 3, 0, $noreg
+    renamable $q0 = MVE_VQSHRUNs32th killed renamable $q0, killed renamable $q1, 3, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+    renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/disjoint-vcmp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/disjoint-vcmp.mir
index 266e8c7e3c358..d71a8299db1a8 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/disjoint-vcmp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/disjoint-vcmp.mir
@@ -145,21 +145,21 @@ body:             |
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 16, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   VSTR_P0_off killed renamable $vpr, $sp, 0, 14 /* CC::al */, $noreg :: (store (s32) into %stack.0)
-  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $r3 = tMOVr $r0, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.bb9:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r12
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-  ; CHECK:   renamable $r3, renamable $q2 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+  ; CHECK:   renamable $r3, renamable $q2 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $r12, renamable $q2 = MVE_VLDRWU32_pre killed renamable $r12, 16, 0, $noreg :: (load (s128) from %ir.scevgep2, align 8)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r12, renamable $q2 = MVE_VLDRWU32_pre killed renamable $r12, 16, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep2, align 8)
   ; CHECK:   MVE_VPTv4u32 8, renamable $q0, killed renamable $q2, 2, implicit-def $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -194,7 +194,7 @@ body:             |
     renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $r12, 19, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 16, 14, $noreg, $noreg
     VSTR_P0_off killed renamable $vpr, $sp, 0, 14, $noreg :: (store (s32) into %stack.0)
-    renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r5, 0, $noreg, $noreg, undef renamable $q0
     $r3 = tMOVr $r0, 14, $noreg
     $lr = t2DoLoopStart renamable $lr
 
@@ -204,14 +204,14 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-    renamable $r3, renamable $q2 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+    renamable $r3, renamable $q2 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-    renamable $r12, renamable $q2 = MVE_VLDRWU32_pre killed renamable $r12, 16, 0, $noreg :: (load (s128) from %ir.scevgep2, align 8)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+    renamable $r12, renamable $q2 = MVE_VLDRWU32_pre killed renamable $r12, 16, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep2, align 8)
     MVE_VPTv4u32 8, renamable $q0, killed renamable $q2, 2, implicit-def $vpr
-    MVE_VSTRWU32 killed renamable $q1, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    MVE_VSTRWU32 killed renamable $q1, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-ignore-vctp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-ignore-vctp.mir
index 737bc008e7621..9155607eb9df9 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-ignore-vctp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-ignore-vctp.mir
@@ -102,14 +102,14 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3, dead $cpsr = tLSLri killed renamable $r2, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.1.do.body (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg, $lr
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $lr, undef renamable $q1
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg, $lr
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
@@ -135,18 +135,18 @@ body:             |
     renamable $r2, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r2, killed renamable $r12, 19, 14, $noreg, $noreg
     renamable $r2 = tLEApcrel %const.0, 14, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
 
   bb.1.do.body (align 4):
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $lr
     MVE_VPST 2, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
-    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $lr
+    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $lr, undef renamable $q1
+    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr, $lr
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14, $noreg

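(Editorial note on reading the updated tests: each MVE instruction in these hunks gains one extra trailing register operand, placed after the existing VPR predicate operand. A minimal before/after sketch, illustrative only and assembled from the MVE_VADDi32 operand layout seen in the hunks here rather than copied from any one test:

    ; before: sources, predicate kind (0 = none, 1 = then), VPR predicate, passthru
    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
    ; after: one loop-predicate register is appended directly after the VPR
    ; operand; it is $noreg when no tail-predication count applies, and $lr
    ; inside a tail-predicated loop, as the dont-ignore-vctp.mir hunks above show
    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0

The remaining file diffs below apply the same mechanical operand update.)
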
diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-remove-loop-update.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-remove-loop-update.mir
index b8301ab46d186..56bb50a43ab29 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-remove-loop-update.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/dont-remove-loop-update.mir
@@ -126,14 +126,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $lr :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $lr :: (load (s128) from %ir.lsr.iv1416, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $lr, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $lr :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
@@ -167,14 +167,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $lr :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $lr :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $lr, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $lr :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/emptyblock.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/emptyblock.mir
index 48e82fdec3786..90ad1743f9ad8 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/emptyblock.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/emptyblock.mir
@@ -412,18 +412,18 @@ body:             |
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tLDRi killed renamable $r0, 1, 14 /* CC::al */, $noreg :: (load (s32) from %ir.NumFilters)
     $r0 = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $r1 = tMOVr $r5, 14 /* CC::al */, $noreg
     $lr = t2DoLoopStart renamable $lr
   
   bb.1.do.body (align 4):
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r11
-  
-    renamable $vpr = MVE_VCTP32 renamable $r0, 0, $noreg
+
+    renamable $vpr = MVE_VCTP32 renamable $r0, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.pInT.033, align 4)
-    renamable $q0 = MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, undef renamable $q0
+    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pInT.033, align 4)
+    renamable $q0 = MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, undef renamable $q0
     renamable $r0, dead $cpsr = tSUBi8 killed renamable $r0, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.1, implicit-def dead $cpsr
@@ -484,21 +484,21 @@ body:             |
     tSTRspi renamable $r7, $sp, 9, 14 /* CC::al */, $noreg :: (store (s32) into %stack.0)
     renamable $r9 = t2ADDri renamable $r0, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r7, dead $cpsr = tMUL renamable $r4, killed renamable $r7, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r5, 0, 0, $noreg :: (load (s128) from %ir.39, align 4)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r5, 0, 0, $noreg, $noreg :: (load (s128) from %ir.39, align 4)
     renamable $r3 = t2ADDrs renamable $r11, killed renamable $r3, 18, 14 /* CC::al */, $noreg, $noreg
     renamable $r5 = t2MUL renamable $r8, renamable $r4, 14 /* CC::al */, $noreg
     renamable $r4 = t2MUL renamable $r9, killed renamable $r4, 14 /* CC::al */, $noreg
     renamable $r7 = t2ADDrs renamable $r11, killed renamable $r7, 18, 14 /* CC::al */, $noreg, $noreg
     renamable $r5 = t2ADDrs renamable $r11, killed renamable $r5, 18, 14 /* CC::al */, $noreg, $noreg
     renamable $r4 = t2ADDrs killed renamable $r11, killed renamable $r4, 18, 14 /* CC::al */, $noreg, $noreg
-    renamable $q1 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %ir.41, align 4)
-    renamable $q3 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q3
-    renamable $q1 = MVE_VLDRWU32 killed renamable $r7, 0, 0, $noreg :: (load (s128) from %ir.44, align 4)
-    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q2
-    renamable $q1 = MVE_VLDRWU32 killed renamable $r5, 0, 0, $noreg :: (load (s128) from %ir.47, align 4)
-    renamable $q1 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-    renamable $q4 = MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg :: (load (s128) from %ir.50, align 4)
-    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q4, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %ir.41, align 4)
+    renamable $q3 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q3
+    renamable $q1 = MVE_VLDRWU32 killed renamable $r7, 0, 0, $noreg, $noreg :: (load (s128) from %ir.44, align 4)
+    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q1 = MVE_VLDRWU32 killed renamable $r5, 0, 0, $noreg, $noreg :: (load (s128) from %ir.47, align 4)
+    renamable $q1 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+    renamable $q4 = MVE_VLDRWU32 killed renamable $r4, 0, 0, $noreg, $noreg :: (load (s128) from %ir.50, align 4)
+    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VMULf32 killed renamable $q4, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LDRi12 $sp, 8, 14 /* CC::al */, $noreg :: (load (s32) from %stack.7)
     $r3 = tMOVr $r10, 14 /* CC::al */, $noreg
     $r5 = tMOVr $r1, 14 /* CC::al */, $noreg
@@ -510,16 +510,16 @@ body:             |
   bb.5.do.body24 (align 4):
     successors: %bb.5(0x7c000000), %bb.6(0x04000000)
     liveins: $lr, $q0, $q1, $q2, $q3, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r11, $r12
-  
-    renamable $r11, renamable $q4 = MVE_VLDRWU32_post killed renamable $r11, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv4, align 4)
-    renamable $r7, renamable $q5 = MVE_VLDRWU32_post killed renamable $r7, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv911, align 4)
-    renamable $q3 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q3, renamable $q4, killed renamable $q5, 0, $noreg
-    renamable $r4, renamable $q5 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1618, align 4)
-    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q2, renamable $q4, killed renamable $q5, 0, $noreg
-    renamable $r5, renamable $q5 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv2325, align 4)
-    renamable $q1 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q1, renamable $q4, killed renamable $q5, 0, $noreg
-    renamable $r3, renamable $q5 = MVE_VLDRWU32_post killed renamable $r3, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv3032, align 4)
-    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q4, killed renamable $q5, 0, $noreg
+
+    renamable $r11, renamable $q4 = MVE_VLDRWU32_post killed renamable $r11, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv4, align 4)
+    renamable $r7, renamable $q5 = MVE_VLDRWU32_post killed renamable $r7, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv911, align 4)
+    renamable $q3 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q3, renamable $q4, killed renamable $q5, 0, $noreg, $noreg
+    renamable $r4, renamable $q5 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1618, align 4)
+    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q2, renamable $q4, killed renamable $q5, 0, $noreg, $noreg
+    renamable $r5, renamable $q5 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv2325, align 4)
+    renamable $q1 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q1, renamable $q4, killed renamable $q5, 0, $noreg, $noreg
+    renamable $r3, renamable $q5 = MVE_VLDRWU32_post killed renamable $r3, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv3032, align 4)
+    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q4, killed renamable $q5, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.5, implicit-def dead $cpsr
     tB %bb.6, 14 /* CC::al */, $noreg
@@ -587,7 +587,7 @@ body:             |
     liveins: $r0, $r2, $r3, $r4, $r5, $r11, $r12
   
     renamable $r1 = t2MUL renamable $r0, renamable $r4, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r1 = t2ADDrs renamable $r11, killed renamable $r1, 18, 14 /* CC::al */, $noreg, $noreg
     $r6 = tMOVr $r4, 14 /* CC::al */, $noreg
     $r7 = tMOVr $r5, 14 /* CC::al */, $noreg
@@ -601,12 +601,12 @@ body:             |
   bb.10.do.body59 (align 4):
     successors: %bb.10(0x7c000000), %bb.11(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r11, $r12
-  
-    renamable $vpr = MVE_VCTP32 renamable $r6, 0, $noreg
+
+    renamable $vpr = MVE_VCTP32 renamable $r6, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r7, renamable $q1 = MVE_VLDRWU32_post killed renamable $r7, 16, 1, renamable $vpr :: (load (s128) from %ir.pInT.21, align 4)
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.pCos0.12, align 4)
-    renamable $q0 = MVE_VFMAf32 killed renamable $q0, killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr
+    renamable $r7, renamable $q1 = MVE_VLDRWU32_post killed renamable $r7, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pInT.21, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pCos0.12, align 4)
+    renamable $q0 = MVE_VFMAf32 killed renamable $q0, killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg
     renamable $r6, dead $cpsr = tSUBi8 killed renamable $r6, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.10, implicit-def dead $cpsr

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/extract-element.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/extract-element.mir
index 5a328b961f9b6..6955007957f4e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/extract-element.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/extract-element.mir
@@ -118,15 +118,15 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_offset 8
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, killed $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, killed $noreg, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
@@ -150,7 +150,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -162,15 +162,15 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r3, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-16.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-16.mir
index d3547f6f84875..4d3f0ac5f92d2 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-16.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-16.mir
@@ -116,14 +116,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRHU16_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRHU16_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 7, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRHU16_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRHU16_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -154,14 +154,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRHU16_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRHU16_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 7, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRHU16_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRHU16_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-32.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-32.mir
index d011ca8dc9764..7ea07bd639560 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-32.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-32.mir
@@ -124,14 +124,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 5, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -162,14 +162,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 5, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-8.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-8.mir
index 8bbe08efdbeea..eb578318c3ac8 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-8.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/incorrect-sub-8.mir
@@ -117,14 +117,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 15, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -155,14 +155,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 15, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-1.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-1.mir
index e5dd18ace1c0f..d9b8ca242ee63 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-1.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-1.mir
@@ -149,29 +149,29 @@ body:             |
   ; CHECK:   renamable $lr = t2BICri killed renamable $lr, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r5 = tLDRspi $sp, 4, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8)
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   $r4 = tMOVr killed $lr, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
+  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $r5, def $r7, def $pc
@@ -198,7 +198,7 @@ body:             |
     renamable $lr = t2BICri killed renamable $lr, 3, 14, $noreg, $noreg
     renamable $r5 = tLDRspi $sp, 4, 14, $noreg :: (load (s32) from %fixed-stack.1, align 8)
     renamable $lr = t2SUBri killed renamable $lr, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
     $r4 = tMOVr killed $lr, 14, $noreg
@@ -207,23 +207,23 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     $lr = tMOVr $r4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
+    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff  --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-2.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-2.mir
index d4694eefb3323..92d59989d955d 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-2.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-2.mir
@@ -149,29 +149,29 @@ body:             |
   ; CHECK:   renamable $lr = t2BICri killed renamable $lr, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r5 = tLDRspi $sp, 4, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8)
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   $r4 = tMOVr killed $lr, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, killed renamable $vpr
-  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
+  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $r5, def $r7, def $pc
@@ -198,7 +198,7 @@ body:             |
     renamable $lr = t2BICri killed renamable $lr, 3, 14, $noreg, $noreg
     renamable $r5 = tLDRspi $sp, 4, 14, $noreg :: (load (s32) from %fixed-stack.1, align 8)
     renamable $lr = t2SUBri killed renamable $lr, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
     $r4 = tMOVr killed $lr, 14, $noreg
@@ -207,23 +207,23 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     $lr = tMOVr $r4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 4, implicit $vpr
-    renamable $vpr = MVE_VPNOT renamable $vpr, 0, killed renamable $vpr
-    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
+    renamable $vpr = MVE_VPNOT renamable $vpr, 0, killed renamable $vpr, $noreg
+    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-3.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-3.mir
index 102f5b9d6ce00..2a8aa845a06b7 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-3.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpnot-3.mir
@@ -149,29 +149,29 @@ body:             |
   ; CHECK:   renamable $lr = t2BICri killed renamable $lr, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r5 = tLDRspi $sp, 4, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8)
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   $r4 = tMOVr killed $lr, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, renamable $vpr, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, renamable $vpr, undef renamable $q0
-  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
-  ; CHECK:   dead renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, renamable $vpr, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, renamable $vpr, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
+  ; CHECK:   dead renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $r5, def $r7, def $pc
@@ -198,7 +198,7 @@ body:             |
     renamable $lr = t2BICri killed renamable $lr, 3, 14, $noreg, $noreg
     renamable $r5 = tLDRspi $sp, 4, 14, $noreg :: (load (s32) from %fixed-stack.1, align 8)
     renamable $lr = t2SUBri killed renamable $lr, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
     $r4 = tMOVr killed $lr, 14, $noreg
@@ -207,23 +207,23 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r4, 14, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, renamable $vpr, undef renamable $q1
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, renamable $vpr, undef renamable $q0
-    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, renamable $vpr :: (store (s128) into %ir.lsr.cast.e, align 4)
-    renamable $vpr = MVE_VPNOT renamable $vpr, 0, $noreg
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, renamable $vpr, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, renamable $vpr, $noreg, undef renamable $q0
+    renamable $r5 = MVE_VSTRWU32_post renamable $q0, killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (store (s128) into %ir.lsr.cast.e, align 4)
+    renamable $vpr = MVE_VPNOT renamable $vpr, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-1.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-1.mir
index 6df65dab7bd45..46a011d2e3003 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-1.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-1.mir
@@ -145,7 +145,7 @@ body:             |
   ; CHECK:   renamable $lr = t2ADDri renamable $r12, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $lr = t2BICri killed renamable $lr, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r5 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead $lr = t2DLS renamable $r5
@@ -153,25 +153,25 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $r5, def $r7, def $pc, implicit killed $r0
   ; CHECK: bb.4:
   ; CHECK:   renamable $r0, dead $cpsr = tMOVi8 0, 14 /* CC::al */, $noreg
@@ -197,7 +197,7 @@ body:             |
     renamable $lr = t2ADDri renamable $r12, 3, 14, $noreg, $noreg
     renamable $r4, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = t2BICri killed renamable $lr, 3, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2SUBri killed renamable $lr, 4, 14, $noreg, $noreg
     renamable $r5 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $r5
@@ -207,21 +207,21 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     $lr = tMOVr $r4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -229,7 +229,7 @@ body:             |
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r4, def $r5, def $r7, def $pc, implicit killed $r0
 
   bb.4:

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-2.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-2.mir
index 84bc9f184a81e..dd9fc35b54e16 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-2.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/inloop-vpsel-2.mir
@@ -147,7 +147,7 @@ body:             |
   ; CHECK:   renamable $lr = t2ADDri renamable $r12, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $lr = t2BICri killed renamable $lr, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r5 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead $lr = t2DLS renamable $r5
@@ -155,24 +155,24 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $r0, renamable $q4 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q4, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+  ; CHECK:   renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $r0, renamable $q4 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q4, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $r5, def $r7, def $pc, implicit killed $r0
   ; CHECK: bb.4:
   ; CHECK:   renamable $r0, dead $cpsr = tMOVi8 0, 14 /* CC::al */, $noreg
@@ -198,7 +198,7 @@ body:             |
     renamable $lr = t2ADDri renamable $r12, 3, 14, $noreg, $noreg
     renamable $r4, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = t2BICri killed renamable $lr, 3, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2SUBri killed renamable $lr, 4, 14, $noreg, $noreg
     renamable $r5 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $lr, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $r5
@@ -208,20 +208,20 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $r4, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17.d, align 2)
-    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820.c, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
-    renamable $r0, renamable $q4 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q4, 0, $noreg, undef renamable $q2
+    renamable $r3, renamable $q1 = MVE_VLDRHS32_post killed renamable $r3, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17.d, align 2)
+    renamable $r2, renamable $q2 = MVE_VLDRHS32_post killed renamable $r2, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820.c, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q4 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q4, 0, $noreg, $noreg, undef renamable $q2
     $lr = tMOVr $r4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -229,7 +229,7 @@ body:             |
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r4, def $r5, def $r7, def $pc, implicit killed $r0
 
   bb.4:

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/invariant-qreg.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/invariant-qreg.mir
index 502f9ed24e9a8..2890b722b6b23 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/invariant-qreg.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/invariant-qreg.mir
@@ -162,20 +162,20 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   tCBZ $r2, %bb.3
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 renamable $q0, killed renamable $q2, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q1, killed renamable $r1, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.store, align 4)
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 renamable $q0, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q1, killed renamable $r1, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.store, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $q0
@@ -191,7 +191,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     tCBZ $r2, %bb.3
 
   bb.1.vector.ph:
@@ -199,7 +199,7 @@ body:             |
     liveins: $q0, $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -211,17 +211,17 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $q1, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-    renamable $q2 = nsw MVE_VMULi32 renamable $q0, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $q2 = nsw MVE_VMULi32 renamable $q0, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 8, implicit $vpr
-    renamable $r1 = MVE_VSTRWU32_post renamable $q1, killed renamable $r1, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.store, align 4)
+    renamable $r1 = MVE_VSTRWU32_post renamable $q1, killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.store, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -278,19 +278,19 @@ body:             |
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r3
   ; CHECK:   $r1 = tMOVr killed $r3, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   $lr = tMOVr $r1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nsw tSUBi8 killed $r1, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r12 = MVE_VMLADAVu32 renamable $q0, killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r12 = MVE_VMLADAVu32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r12
@@ -320,7 +320,7 @@ body:             |
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r3
     $r1 = tMOVr killed $r3, 14 /* CC::al */, $noreg
 
@@ -328,13 +328,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     $lr = tMOVr $r1, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = nsw tSUBi8 killed $r1, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r12 = MVE_VMLADAVu32 renamable $q0, killed renamable $q1, 0, $noreg
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r12 = MVE_VMLADAVu32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -396,20 +396,20 @@ body:             |
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r3
   ; CHECK:   $r1 = tMOVr killed $r3, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
   ; CHECK:   $lr = tMOVr $r1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r1, dead $cpsr = nsw tSUBi8 killed $r1, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r12
@@ -439,7 +439,7 @@ body:             |
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r3
     $r1 = tMOVr killed $r3, 14 /* CC::al */, $noreg
 
@@ -447,14 +447,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
     $lr = tMOVr $r1, 14 /* CC::al */, $noreg
-    renamable $q1 = nsw MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r1, dead $cpsr = nsw tSUBi8 killed $r1, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain-store.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain-store.mir
index b7a48480ac312..ae13493c4af83 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain-store.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain-store.mir
@@ -150,9 +150,9 @@ body:             |
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   $lr = tMOVr $r2, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = nsw tSUBi8 killed $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
-  ; CHECK:   renamable $q0 = MVE_VMULf32 killed renamable $q0, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 0, killed $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
+  ; CHECK:   renamable $q0 = MVE_VMULf32 killed renamable $q0, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
   ; CHECK:   dead $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2.do.end:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -182,15 +182,15 @@ body:             |
     liveins: $r0, $r1, $r2, $r12
 
     $lr = tMOVr $r2, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = nsw tSUBi8 killed $r2, 1, 14 /* CC::al */, $noreg
     renamable $r12 = nsw t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.pSrc.addr.02, align 4)
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VMULf32 killed renamable $q0, renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMULf32 killed renamable $q0, renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr :: (store (s128) into %ir.pDst.addr.01, align 4)
+    renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
     t2LoopEnd killed renamable $lr, %bb.1, implicit-def dead $cpsr
     tB %bb.2, 14 /* CC::al */, $noreg
 
@@ -251,9 +251,9 @@ body:             |
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   $lr = tMOVr $r2, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = nsw tSUBi8 killed $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
-  ; CHECK:   renamable $q0 = MVE_VMULf32 killed renamable $q0, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 0, killed $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
+  ; CHECK:   renamable $q0 = MVE_VMULf32 killed renamable $q0, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
   ; CHECK:   dead $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2.do.end:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -283,15 +283,15 @@ body:             |
     liveins: $r0, $r1, $r2, $r12
 
     $lr = tMOVr $r2, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = nsw tSUBi8 killed $r2, 1, 14 /* CC::al */, $noreg
     renamable $r12 = nsw t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.pSrc.addr.02, align 4)
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pSrc.addr.02, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VMULf32 killed renamable $q0, renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMULf32 killed renamable $q0, renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr :: (store (s128) into %ir.pDst.addr.01, align 4)
+    renamable $r1 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.pDst.addr.01, align 4)
     t2LoopEnd killed renamable $lr, %bb.1, implicit-def dead $cpsr
     tB %bb.2, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain.mir
index ed0685befdabc..15475c2488f5d 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-chain.mir
@@ -113,14 +113,14 @@ body:             |
   ; CHECK:   dead renamable $r12 = t2ADDri killed renamable $r2, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead renamable $r2, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.1.do.body (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
@@ -148,18 +148,18 @@ body:             |
     renamable $r2, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r2, killed renamable $r12, 19, 14, $noreg, $noreg
     renamable $r2 = tLEApcrel %const.0, 14, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
 
   bb.1.do.body (align 4):
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
-    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q1
+    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr, $noreg
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-itercount.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-itercount.mir
index 94019b3464473..f8f7b95948ba1 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-itercount.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-itercount.mir
@@ -105,14 +105,14 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3, dead $cpsr = tLSLri killed renamable $r2, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.1.do.body (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
@@ -139,18 +139,18 @@ body:             |
     renamable $r2, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r2, killed renamable $r12, 19, 14, $noreg, $noreg
     renamable $r2 = tLEApcrel %const.0, 14, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
 
   bb.1.do.body (align 4):
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
-    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q1
+    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr, $noreg
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-mov.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-mov.mir
index 6e73fcbc5a5a2..bac0cf03579fe 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-mov.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-mov.mir
@@ -64,14 +64,14 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 3, 14 /* CC::al */, $noreg
   ; CHECK:   $r12 = tMOVr $r1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $r3, 19, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $r3 = tMOVr $r0, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r12
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.3(0x7c000000), %bb.4(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r3, 0, 0, $noreg
-  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VADDf32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r3, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VADDf32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nuw tADDi8 killed renamable $r3, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.3
   ; CHECK: bb.4:
@@ -86,16 +86,16 @@ body:             |
   ; CHECK:   renamable $s2 = VUITOS killed renamable $s2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2DLS killed $r4
   ; CHECK:   renamable $s4 = nnan ninf nsz VDIVS killed renamable $s0, killed renamable $s2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK: bb.5:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.6(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3, $s4
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   $r4 = VMOVRS $s4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-  ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VSUB_qr_f32 killed renamable $q2, killed renamable $r4, 1, renamable $vpr, undef renamable $q2
-  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VFMAf32 killed renamable $q0, killed renamable $q2, killed renamable $q2, 1, killed renamable $vpr
+  ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+  ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VSUB_qr_f32 killed renamable $q2, killed renamable $r4, 1, renamable $vpr, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VFMAf32 killed renamable $q0, killed renamable $q2, killed renamable $q2, 1, killed renamable $vpr, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.5
@@ -150,7 +150,7 @@ body:             |
     renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 3, 14, $noreg
     $r12 = tMOVr $r1, 14, $noreg
     renamable $r4 = nuw nsw t2ADDrs killed renamable $r4, killed renamable $r3, 19, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $r3 = tMOVr $r0, 14, $noreg
     $lr = t2DoLoopStart renamable $lr
 
@@ -158,11 +158,11 @@ body:             |
     successors: %bb.3(0x7c000000), %bb.4(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 4, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r3, 0, 1, renamable $vpr
-    renamable $q0 = nnan ninf nsz MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, renamable $q0
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r3, 0, 1, renamable $vpr, $noreg
+    renamable $q0 = nnan ninf nsz MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, renamable $q0
     renamable $r12 = nsw t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = nuw tADDi8 killed renamable $r3, 16, 14, $noreg
     t2LoopEnd renamable $lr, %bb.3, implicit-def dead $cpsr
@@ -181,18 +181,18 @@ body:             |
     renamable $s2 = VUITOS killed renamable $s2, 14, $noreg
     $lr = t2DoLoopStart killed $r4
     renamable $s4 = nnan ninf nsz VDIVS killed renamable $s0, killed renamable $s2, 14, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
 
   bb.5:
     successors: %bb.5(0x7c000000), %bb.6(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $s4
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     $r4 = VMOVRS $s4, 14, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q2 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q2 = nnan ninf nsz MVE_VSUB_qr_f32 killed renamable $q2, killed renamable $r4, 1, renamable $vpr, undef renamable $q2
-    renamable $q0 = nnan ninf nsz MVE_VFMAf32 killed renamable $q0, killed renamable $q2, renamable $q2, 1, killed renamable $vpr
+    renamable $q2 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+    renamable $q2 = nnan ninf nsz MVE_VSUB_qr_f32 killed renamable $q2, killed renamable $r4, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q0 = nnan ninf nsz MVE_VFMAf32 killed renamable $q0, killed renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 4, 14, $noreg
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-random.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-random.mir
index 9c7b7c8d36a06..a439f5acf0e3e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-random.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/it-block-random.mir
@@ -114,14 +114,14 @@ body:             |
   ; CHECK:   dead renamable $r12 = t2ADDri killed renamable $r2, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead renamable $r2, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.1.do.body (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg
-  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 0, killed $noreg, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
@@ -148,18 +148,18 @@ body:             |
     renamable $r2, dead $cpsr = tMOVi8 1, 14, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r2, killed renamable $r12, 19, 14, $noreg, $noreg
     renamable $r2 = tLEApcrel %const.0, 14, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
 
   bb.1.do.body (align 4):
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr
-    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
-    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr
+    renamable $q1 = nnan ninf nsz MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg
+    renamable $q1 = nnan ninf nsz MVE_VMULf32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q1
+    MVE_VSTRWU32 killed renamable $q1, renamable $r1, 0, 1, killed renamable $vpr, $noreg
     renamable $r0, dead $cpsr = nuw tADDi8 killed renamable $r0, 16, 14, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r1, dead $cpsr = nuw tADDi8 killed renamable $r1, 16, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp-reordered.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp-reordered.mir
index d746b84956583..dbe0ac4a7ab78 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp-reordered.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp-reordered.mir
@@ -112,31 +112,31 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q3
+  ; CHECK:   renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q3
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead $lr = t2DLS renamable $r3
   ; CHECK:   $r4 = tMOVr killed $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $q1, $q2, $q3, $r0, $r1, $r2, $r4
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 1, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   $sp = frame-destroy VLDMDIA_UPD $sp, 14 /* CC::al */, $noreg, def $d8, def $d9
@@ -162,18 +162,18 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-    renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q2
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-    renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q3
+    renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q3
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     $lr = t2DoLoopStart renamable $r3
     $r4 = tMOVr killed $r3, 14 /* CC::al */, $noreg
     renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
 
   bb.2.vector.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
@@ -182,13 +182,13 @@ body:             |
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 0, $noreg
+    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 0, $noreg, $noreg
     MVE_VPST 1, implicit $vpr
-    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp.mir
index d01777c4441d4..be8fd89e6cc8f 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-two-vcmp.mir
@@ -109,31 +109,31 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q3
+  ; CHECK:   renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q3
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   dead $lr = t2DLS renamable $r3
   ; CHECK:   $r4 = tMOVr killed $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $q1, $q2, $q3, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 1, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   $sp = frame-destroy VLDMDIA_UPD $sp, 14 /* CC::al */, $noreg, def $d8, def $d9
@@ -159,33 +159,33 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-    renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q2
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-    renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q3
+    renamable $q3 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q3
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     $lr = t2DoLoopStart renamable $r3
     $r4 = tMOVr killed $r3, 14 /* CC::al */, $noreg
     renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
 
   bb.2.vector.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $q1, $q2, $q3, $r0, $r1, $r2, $r4
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     MVE_VPST 1, implicit $vpr
-    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr
-    renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q2, 2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q4 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q4, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q3, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-vcmp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-vcmp.mir
index f8622b1ac8627..ec396187b92c9 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-vcmp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/iv-vcmp.mir
@@ -101,19 +101,19 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
-  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $q2, $r0, $r1
   ; CHECK:   MVE_VPTv4u32 4, renamable $q1, renamable $q0, 8, implicit-def $vpr
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q3, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q3, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -138,28 +138,28 @@ body:             |
     liveins: $r0, $r1, $r2, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     renamable $r3, dead $cpsr = tLSRri renamable $r2, 1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $q1, $q2, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr
-    renamable $r1, renamable $q3 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv35, align 4)
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q3, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv12, align 4)
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q0, 8, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q3 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv35, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q3, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv12, align 4)
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/livereg-no-loop-def.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/livereg-no-loop-def.mir
index f4633a3ee9cc6..6322ddf615b35 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/livereg-no-loop-def.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/livereg-no-loop-def.mir
@@ -93,21 +93,21 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r4, -8
   ; CHECK:   renamable $r12 = t2ADDri $sp, 8, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   tCBZ $r3, %bb.3
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1, $r2
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post renamable $q1, killed renamable $r2, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.store, align 4)
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post renamable $q1, killed renamable $r2, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.store, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $q0
@@ -123,7 +123,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r4, -8
     renamable $r12 = t2ADDri $sp, 8, 14, $noreg, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     tCBZ $r3, %bb.3
 
   bb.1.vector.ph:
@@ -133,7 +133,7 @@ body:             |
     renamable $r12 = t2ADDri renamable $r3, 3, 14, $noreg, $noreg
     renamable $lr = t2MOVi 1, 14, $noreg, $noreg
     renamable $r12 = t2BICri killed renamable $r12, 3, 14, $noreg, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     renamable $r4 = nuw nsw t2ADDrs killed renamable $lr, killed renamable $r12, 19, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $r4
@@ -143,18 +143,18 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $q1, $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q2 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r12, 14, $noreg
-    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
+    renamable $q2 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
     renamable $r12 = nsw t2SUBri killed $r12, 1, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.store, align 4)
+    renamable $r2 = MVE_VSTRWU32_post renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.store, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/lstp-insertion-position.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/lstp-insertion-position.mir
index 33ea6fd9d4686..c8d03fdb68a3e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/lstp-insertion-position.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/lstp-insertion-position.mir
@@ -159,17 +159,17 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tLDRpci %const.0, 14 /* CC::al */, $noreg :: (load (s32) from constant-pool)
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $s4 = VMOVS killed $s0, 14 /* CC::al */, $noreg, implicit killed $q1, implicit-def $q1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q1, $r0, $r1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv12, align 4)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1315, align 4)
-  ; CHECK:   renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv12, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1315, align 4)
+  ; CHECK:   renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q1
@@ -205,20 +205,20 @@ body:             |
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tLDRpci %const.0, 14 /* CC::al */, $noreg :: (load (s32) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
     $s4 = VMOVS killed $s0, 14 /* CC::al */, $noreg, implicit killed $q1, implicit-def $q1
 
   bb.2.vector.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q1, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 2, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv12, align 4)
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1315, align 4)
-    renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv12, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1315, align 4)
+    renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -291,18 +291,18 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $lr = t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tLDRpci %const.0, 14 /* CC::al */, $noreg :: (load (s32) from constant-pool)
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r2, dead $cpsr = tLSRri killed renamable $r2, 2, 14 /* CC::al */, $noreg
   ; CHECK:   $s4 = VMOVS killed $s0, 14 /* CC::al */, $noreg, implicit killed $q1, implicit-def $q1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q1, $r0, $r1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
-  ; CHECK:   renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q1
@@ -340,7 +340,7 @@ body:             |
     renamable $lr = t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tLDRpci %const.0, 14 /* CC::al */, $noreg :: (load (s32) from constant-pool)
     $lr = t2DoLoopStart renamable $lr
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
     renamable $r2, dead $cpsr = tLSRri killed renamable $r2, 2, 14 /* CC::al */, $noreg
     $s4 = VMOVS killed $s0, 14 /* CC::al */, $noreg, implicit killed $q1, implicit-def $q1
 
@@ -348,13 +348,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q1, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 2, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
-    renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $q1 = MVE_VFMAf32 killed renamable $q1, killed renamable $q2, killed renamable $q0, 1, killed renamable $vpr, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/matrix.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/matrix.mir
index df691b2a4357d..070a20734c2eb 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/matrix.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/matrix.mir
@@ -272,7 +272,7 @@ body:             |
   ; CHECK:   renamable $r3 = t2LSLri $r10, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi3 killed renamable $r0, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r7, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r7, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r0 = nuw nsw t2ADDrs killed renamable $r0, renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tLSRri killed renamable $r1, 2, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r9 = t2SUBrs $r10, killed renamable $r1, 18, 14 /* CC::al */, $noreg, $noreg
@@ -280,7 +280,7 @@ body:             |
   ; CHECK:   successors: %bb.6(0x80000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r3, $r4, $r5, $r7, $r8, $r9, $r10, $r12
   ; CHECK:   renamable $r1 = t2LDRs renamable $r4, renamable $r7, 2, 14 /* CC::al */, $noreg :: (load (s32) from %ir.arrayidx12.us)
-  ; CHECK:   $q1 = MVE_VORR $q0, $q0, 0, $noreg, undef $q1
+  ; CHECK:   $q1 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q1
   ; CHECK:   $r2 = tMOVr killed $lr, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $q1 = MVE_VMOV_to_lane_32 killed renamable $q1, killed renamable $r1, 0, 14 /* CC::al */, $noreg
   ; CHECK:   $r6 = tMOVr $r5, 14 /* CC::al */, $noreg
@@ -290,23 +290,23 @@ body:             |
   ; CHECK: bb.6.vector.body:
   ; CHECK:   successors: %bb.6(0x7c000000), %bb.7(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-  ; CHECK:   $q2 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q2
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q2 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q2
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r6, renamable $q1 = MVE_VLDRHS32_post killed renamable $r6, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1012, align 2)
-  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv46, align 2)
+  ; CHECK:   renamable $r6, renamable $q1 = MVE_VLDRHS32_post killed renamable $r6, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1012, align 2)
+  ; CHECK:   renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv46, align 2)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q2, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q2, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.6
   ; CHECK: bb.7.middle.block:
   ; CHECK:   successors: %bb.8(0x04000000), %bb.5(0x7c000000)
   ; CHECK:   liveins: $q0, $q1, $q2, $r0, $r3, $r4, $r5, $r7, $r8, $r9, $r10, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r9, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r9, 0, $noreg, $noreg
   ; CHECK:   renamable $r5 = tADDhirr killed renamable $r5, renamable $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VPSEL killed renamable $q1, killed renamable $q2, 0, killed renamable $vpr
+  ; CHECK:   renamable $q1 = MVE_VPSEL killed renamable $q1, killed renamable $q2, 0, killed renamable $vpr, $noreg
   ; CHECK:   $lr = tMOVr $r10, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   t2STRs killed renamable $r2, renamable $r4, renamable $r7, 2, 14 /* CC::al */, $noreg :: (store (s32) into %ir.27)
   ; CHECK:   renamable $r7, dead $cpsr = nuw nsw tADDi8 killed renamable $r7, 1, 14 /* CC::al */, $noreg
   ; CHECK:   tCMPhir renamable $r7, $r10, 14 /* CC::al */, $noreg, implicit-def $cpsr
@@ -428,7 +428,7 @@ body:             |
     renamable $r3 = t2LSLri $r10, 1, 14, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi3 killed renamable $r0, 4, 14, $noreg
     renamable $r0, dead $cpsr = tMOVi8 1, 14, $noreg
-    renamable $q0 = MVE_VDUP32 renamable $r7, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 renamable $r7, 0, $noreg, $noreg, undef renamable $q0
     renamable $r0 = nuw nsw t2ADDrs killed renamable $r0, renamable $r1, 19, 14, $noreg, $noreg
     renamable $r1, dead $cpsr = tLSRri killed renamable $r1, 2, 14, $noreg
     renamable $r9 = t2SUBrs $r10, killed renamable $r1, 18, 14, $noreg, $noreg
@@ -438,7 +438,7 @@ body:             |
     liveins: $lr, $q0, $r0, $r3, $r4, $r5, $r7, $r8, $r9, $r10, $r12
 
     renamable $r1 = t2LDRs renamable $r4, renamable $r7, 2, 14, $noreg :: (load (s32) from %ir.arrayidx12.us)
-    $q1 = MVE_VORR $q0, $q0, 0, $noreg, undef $q1
+    $q1 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q1
     $r2 = tMOVr killed $lr, 14, $noreg
     renamable $q1 = MVE_VMOV_to_lane_32 killed renamable $q1, killed renamable $r1, 0, 14, $noreg
     $r6 = tMOVr $r5, 14, $noreg
@@ -450,15 +450,15 @@ body:             |
     successors: %bb.6(0x7c000000), %bb.7(0x04000000)
     liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-    $q2 = MVE_VORR killed $q1, $q1, 0, $noreg, undef $q2
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+    $q2 = MVE_VORR killed $q1, $q1, 0, $noreg, $noreg, undef $q2
     MVE_VPST 4, implicit $vpr
-    renamable $r6, renamable $q1 = MVE_VLDRHS32_post killed renamable $r6, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1012, align 2)
-    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv46, align 2)
+    renamable $r6, renamable $q1 = MVE_VLDRHS32_post killed renamable $r6, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1012, align 2)
+    renamable $r1, renamable $q3 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv46, align 2)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q3, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q2, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q2, 0, $noreg, $noreg, undef renamable $q1
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.7, 14, $noreg
 
@@ -466,11 +466,11 @@ body:             |
     successors: %bb.8(0x04000000), %bb.5(0x7c000000)
     liveins: $q0, $q1, $q2, $r0, $r3, $r4, $r5, $r7, $r8, $r9, $r10, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r9, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r9, 0, $noreg, $noreg
     renamable $r5 = tADDhirr killed renamable $r5, renamable $r3, 14, $noreg
-    renamable $q1 = MVE_VPSEL killed renamable $q1, killed renamable $q2, 0, killed renamable $vpr
+    renamable $q1 = MVE_VPSEL killed renamable $q1, killed renamable $q2, 0, killed renamable $vpr, $noreg
     $lr = tMOVr $r10, 14, $noreg
-    renamable $r2 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg
+    renamable $r2 = MVE_VADDVu32no_acc killed renamable $q1, 0, $noreg, $noreg
     t2STRs killed renamable $r2, renamable $r4, renamable $r7, 2, 14, $noreg :: (store (s32) into %ir.27)
     renamable $r7, dead $cpsr = nuw nsw tADDi8 killed renamable $r7, 1, 14, $noreg
     tCMPhir renamable $r7, $r10, 14, $noreg, implicit-def $cpsr

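For out-of-tree code that builds these MVE instructions directly, a minimal
sketch of what appending the extra operand looks like with
MachineInstrBuilder (the opcode and register names below are hypothetical
placeholders, not taken from this patch; only the operand order mirrors the
MIR above):

    #include "llvm/CodeGen/MachineInstrBuilder.h"

    // Sketch only: MBB, InsertPt, DL, TII, Opcode, DstReg and SrcReg are
    // assumed to be in scope. Passing register 0 to addReg() produces the
    // $noreg operand seen in the checks above.
    llvm::MachineInstrBuilder MIB =
        llvm::BuildMI(MBB, InsertPt, DL, TII->get(Opcode), DstReg)
            .addReg(SrcReg)
            .addImm(0)    // predicate-type immediate, as in the MIR above
            .addReg(0)    // VPR predicate operand ($noreg when unpredicated)
            .addReg(0);   // new trailing register operand, $noreg here
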
diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-after-dlstp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-after-dlstp.mir
index 4504ecc0c14b9..af76970f18da8 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-after-dlstp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-after-dlstp.mir
@@ -154,7 +154,7 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_offset 8
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r4, -8
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $r3 = tMOVr $r1, 14 /* CC::al */, $noreg
   ; CHECK:   $r12 = tMOVr $r0, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r3
@@ -162,8 +162,8 @@ body:             |
   ; CHECK: bb.1.do.body.i:
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r4, $r12
-  ; CHECK:   renamable $r12, renamable $q1 = MVE_VLDRWU32_post killed renamable $r12, 16, 0, $noreg :: (load (s128) from %ir.pSrc.addr.0.i2, align 4)
-  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VADDf32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r12, renamable $q1 = MVE_VLDRWU32_post killed renamable $r12, 16, 0, $noreg, $noreg :: (load (s128) from %ir.pSrc.addr.0.i2, align 4)
+  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VADDf32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2.arm_mean_f32_mve.exit:
   ; CHECK:   successors: %bb.3(0x80000000)
@@ -175,18 +175,18 @@ body:             |
   ; CHECK:   renamable $s4 = VUITOS killed renamable $s4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $s0 = nnan ninf nsz arcp contract afn reassoc VDIVS killed renamable $s0, killed renamable $s4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = VMOVRS killed renamable $s0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $r3 = tMOVr $r1, 14 /* CC::al */, $noreg
   ; CHECK: bb.3.do.body:
   ; CHECK:   successors: %bb.3(0x7c000000), %bb.4(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.pSrc.addr.01, align 4)
-  ; CHECK:   renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VSUBf32 killed renamable $q2, renamable $q1, 1, renamable $vpr, undef renamable $q2
-  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q2, killed renamable $q2, 1, killed renamable $vpr
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pSrc.addr.01, align 4)
+  ; CHECK:   renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VSUBf32 killed renamable $q2, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q2, killed renamable $q2, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.3
   ; CHECK: bb.4.do.end:
   ; CHECK:   liveins: $q0, $r1, $r2
@@ -211,7 +211,7 @@ body:             |
     renamable $r3 = tMOVi8 $noreg, 4, 10 /* CC::ge */, killed $cpsr, implicit killed renamable $r3, implicit killed $itstate
     renamable $r12 = t2MOVi 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBrr renamable $r1, killed renamable $r3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 3, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r12, killed renamable $r3, 19, 14 /* CC::al */, $noreg, $noreg
     $r3 = tMOVr $r1, 14 /* CC::al */, $noreg
@@ -223,12 +223,12 @@ body:             |
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 4, implicit $vpr
-    renamable $r12, renamable $q1 = MVE_VLDRWU32_post killed renamable $r12, 16, 1, renamable $vpr :: (load (s128) from %ir.pSrc.addr.0.i2, align 4)
-    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, renamable $q0
+    renamable $r12, renamable $q1 = MVE_VLDRWU32_post killed renamable $r12, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pSrc.addr.0.i2, align 4)
+    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VADDf32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, renamable $q0
     t2LoopEnd renamable $lr, %bb.1, implicit-def dead $cpsr
     tB %bb.2, 14 /* CC::al */, $noreg
 
@@ -243,21 +243,21 @@ body:             |
     renamable $s4 = VUITOS killed renamable $s4, 14 /* CC::al */, $noreg
     renamable $s0 = nnan ninf nsz arcp contract afn reassoc VDIVS killed renamable $s0, killed renamable $s4, 14 /* CC::al */, $noreg
     renamable $r3 = VMOVRS killed renamable $s0, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VDUP32 killed renamable $r3, 0, $noreg, $noreg, undef renamable $q1
     $r3 = tMOVr $r1, 14 /* CC::al */, $noreg
 
   bb.3.do.body:
     successors: %bb.3(0x7c000000), %bb.4(0x04000000)
     liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 2, implicit $vpr
-    renamable $r0, renamable $q2 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.pSrc.addr.01, align 4)
-    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VSUBf32 killed renamable $q2, renamable $q1, 1, renamable $vpr, undef renamable $q2
-    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q2, renamable $q2, 1, killed renamable $vpr
+    renamable $r0, renamable $q2 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.pSrc.addr.01, align 4)
+    renamable $q2 = nnan ninf nsz arcp contract afn reassoc MVE_VSUBf32 killed renamable $q2, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VFMAf32 killed renamable $q0, killed renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg
     t2LoopEnd renamable $lr, %bb.3, implicit-def dead $cpsr
     tB %bb.4, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-lr-terminator.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-lr-terminator.mir
index 11d71103fec38..6f7a8cde23928 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-lr-terminator.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/mov-lr-terminator.mir
@@ -119,12 +119,12 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 0, $noreg :: (load (s32) from %ir.scevgep45, align 1)
+  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 0, $noreg, $noreg :: (load (s32) from %ir.scevgep45, align 1)
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 0, $noreg :: (load (s32) from %ir.scevgep23, align 1)
-  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 0, $noreg, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -161,17 +161,17 @@ body:             |
     liveins: $lr, $r0, $r1, $r2, $r3, $r12
 
     renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep45, align 1)
+    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep45, align 1)
     renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14, $noreg
     renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep23, align 1)
-    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-def-before-start.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-def-before-start.mir
index af8412ab2fbd2..08353c8f92f18 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-def-before-start.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-def-before-start.mir
@@ -130,17 +130,17 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r12
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep45, align 1)
+  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep45, align 1)
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep23, align 1)
-  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -177,17 +177,17 @@ body:             |
     liveins: $lr, $r0, $r1, $r2, $r3, $r12
 
     renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep45, align 1)
+    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep45, align 1)
     renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14, $noreg
     renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep23, align 1)
-    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-start-after-def.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-start-after-def.mir
index df79e5b9045d6..26b887906ea3e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-start-after-def.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/move-start-after-def.mir
@@ -130,17 +130,17 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r12
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep45, align 1)
+  ; CHECK:   renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep45, align 1)
   ; CHECK:   renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r12, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep23, align 1)
-  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+  ; CHECK:   renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -177,17 +177,17 @@ body:             |
     liveins: $lr, $r0, $r1, $r2, $r3, $r12
 
     renamable $r4, dead $cpsr = tADDrr renamable $r1, renamable $r3, 14, $noreg
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep45, align 1)
+    renamable $q0 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep45, align 1)
     renamable $r4, dead $cpsr = tADDrr renamable $r2, renamable $r3, 14, $noreg
     renamable $r3, dead $cpsr = tADDi8 killed renamable $r3, 4, 14, $noreg
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr :: (load (s32) from %ir.scevgep23, align 1)
-    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s32) from %ir.scevgep23, align 1)
+    renamable $q0 = nuw nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-block-cond-iter-count.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-block-cond-iter-count.mir
index 5b382eaa525f2..f17496c3d1653 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-block-cond-iter-count.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-block-cond-iter-count.mir
@@ -233,10 +233,10 @@ body:             |
   ; CHECK: bb.3 (%ir-block.33):
   ; CHECK:   successors: %bb.3(0x7c000000), %bb.4(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg, $noreg
   ; CHECK:   $r0 = tMOVr $r2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.3
   ; CHECK: bb.4 (%ir-block.64):
@@ -358,14 +358,14 @@ body:             |
     successors: %bb.3(0x7c000000), %bb.4(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r2, 14, $noreg
     t2LoopEnd renamable $lr, %bb.3, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-cond-iter-count.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-cond-iter-count.mir
index 056aae9831ba6..5ce9a63025d04 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-cond-iter-count.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multi-cond-iter-count.mir
@@ -104,10 +104,10 @@ body:             |
   ; CHECK: bb.2 (%ir-block.18):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r3
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 0, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg, $noreg
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3 (%ir-block.34):
@@ -149,14 +149,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multiple-do-loops.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multiple-do-loops.mir
index 4864bcfe1d133..6e8ad0877f1c2 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multiple-do-loops.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/multiple-do-loops.mir
@@ -372,10 +372,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r5, $r6, $r8
-  ; CHECK:   renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
-  ; CHECK:   renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
+  ; CHECK:   renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
+  ; CHECK:   renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond4.preheader:
   ; CHECK:   successors: %bb.6(0x30000000), %bb.4(0x50000000)
@@ -389,12 +389,12 @@ body:             |
   ; CHECK: bb.5.vector.body38:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.6(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r12
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
-  ; CHECK:   renamable $r12, renamable $q2 = MVE_VLDRWU32_post killed renamable $r12, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
-  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
+  ; CHECK:   renamable $r12, renamable $q2 = MVE_VLDRWU32_post killed renamable $r12, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
+  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
   ; CHECK:   $r0 = tMOVr $r12, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.5
   ; CHECK: bb.6.for.cond.cleanup6:
@@ -437,14 +437,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r8, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv6264, align 4)
-    renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv6567, align 4)
+    renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
+    renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
     renamable $r4, dead $cpsr = tSUBi8 killed renamable $r4, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv6870, align 4)
+    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -468,16 +468,16 @@ body:             |
     successors: %bb.5(0x7c000000), %bb.6(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv55, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv5658, align 4)
-    renamable $r12, renamable $q2 = MVE_VLDRWU32_post killed renamable $r12, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv5961, align 4)
-    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
+    renamable $r12, renamable $q2 = MVE_VLDRWU32_post killed renamable $r12, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
+    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv5961, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r12, 14, $noreg
     t2LoopEnd renamable $lr, %bb.5, implicit-def dead $cpsr
@@ -576,10 +576,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r5, $r6, $r8
-  ; CHECK:   renamable $r5, renamable $q0 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
-  ; CHECK:   renamable $r6, renamable $q1 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
+  ; CHECK:   renamable $r5, renamable $q0 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
+  ; CHECK:   renamable $r6, renamable $q1 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond4.preheader:
   ; CHECK:   successors: %bb.6(0x30000000), %bb.4(0x50000000)
@@ -593,12 +593,12 @@ body:             |
   ; CHECK: bb.5.vector.body38:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.6(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
-  ; CHECK:   renamable $r4, renamable $q2 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
-  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
+  ; CHECK:   renamable $r4, renamable $q2 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
+  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
   ; CHECK:   $r0 = tMOVr $r4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.5
   ; CHECK: bb.6.for.cond.cleanup6:
@@ -643,14 +643,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r8, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r5, renamable $q0 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv6264, align 4)
-    renamable $r6, renamable $q1 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv6567, align 4)
+    renamable $r5, renamable $q0 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv6264, align 4)
+    renamable $r6, renamable $q1 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv6567, align 4)
     renamable $r4, dead $cpsr = tSUBi8 killed renamable $r4, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv6870, align 4)
+    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv6870, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -676,16 +676,16 @@ body:             |
     successors: %bb.5(0x7c000000), %bb.6(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r4
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv55, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv5658, align 4)
-    renamable $r4, renamable $q2 = MVE_VLDRWU32_post killed renamable $r4, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv5961, align 4)
-    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv55, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv5658, align 4)
+    renamable $r4, renamable $q2 = MVE_VLDRWU32_post killed renamable $r4, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv5961, align 4)
+    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv5961, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv5961, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r4, 14, $noreg
     t2LoopEnd renamable $lr, %bb.5, implicit-def dead $cpsr
@@ -791,10 +791,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r5, $r6, $r8
-  ; CHECK:   renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv117119, align 4)
-  ; CHECK:   renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv120122, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv123125, align 4)
+  ; CHECK:   renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv117119, align 4)
+  ; CHECK:   renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv120122, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv123125, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond4.preheader:
   ; CHECK:   successors: %bb.6(0x30000000), %bb.4(0x50000000)
@@ -814,12 +814,12 @@ body:             |
   ; CHECK: bb.5.vector.body65:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.6(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3, $r4, $r6, $r9, $r10
-  ; CHECK:   renamable $r4, renamable $q0 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv108110, align 4)
-  ; CHECK:   renamable $r9, renamable $q1 = MVE_VLDRWU32_post killed renamable $r9, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv111113, align 4)
-  ; CHECK:   renamable $r6, renamable $q2 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv114116, align 4)
-  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r10, 0, 0, killed $noreg :: (store (s128) into %ir.lsr.iv114116, align 4)
+  ; CHECK:   renamable $r4, renamable $q0 = MVE_VLDRWU32_post killed renamable $r4, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv108110, align 4)
+  ; CHECK:   renamable $r9, renamable $q1 = MVE_VLDRWU32_post killed renamable $r9, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv111113, align 4)
+  ; CHECK:   renamable $r6, renamable $q2 = MVE_VLDRWU32_post killed renamable $r6, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv114116, align 4)
+  ; CHECK:   renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r10, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv114116, align 4)
   ; CHECK:   $r10 = tMOVr $r6, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.5
   ; CHECK: bb.6.for.cond15.preheader:
@@ -834,12 +834,12 @@ body:             |
   ; CHECK: bb.8.vector.body84:
   ; CHECK:   successors: %bb.8(0x7c000000), %bb.9(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r5
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv101, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv102104, align 4)
-  ; CHECK:   renamable $r5, renamable $q2 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv105107, align 4)
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = MVE_VSUBi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg :: (store (s128) into %ir.lsr.iv105107, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv101, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv102104, align 4)
+  ; CHECK:   renamable $r5, renamable $q2 = MVE_VLDRWU32_post killed renamable $r5, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv105107, align 4)
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VSUBi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv105107, align 4)
   ; CHECK:   $r0 = tMOVr $r5, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.8
   ; CHECK: bb.9.for.cond.cleanup17:
@@ -884,14 +884,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r8, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv117119, align 4)
-    renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv120122, align 4)
+    renamable $r6, renamable $q0 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv117119, align 4)
+    renamable $r5, renamable $q1 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv120122, align 4)
     renamable $r4, dead $cpsr = tSUBi8 killed renamable $r4, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv123125, align 4)
+    renamable $r8 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r8, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv123125, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -925,16 +925,16 @@ body:             |
     successors: %bb.5(0x7c000000), %bb.6(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r8, $r9, $r10, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r5, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r5, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r4, renamable $q0 = MVE_VLDRWU32_post killed renamable $r4, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv108110, align 4)
-    renamable $r9, renamable $q1 = MVE_VLDRWU32_post killed renamable $r9, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv111113, align 4)
-    renamable $r6, renamable $q2 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv114116, align 4)
-    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r4, renamable $q0 = MVE_VLDRWU32_post killed renamable $r4, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv108110, align 4)
+    renamable $r9, renamable $q1 = MVE_VLDRWU32_post killed renamable $r9, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv111113, align 4)
+    renamable $r6, renamable $q2 = MVE_VLDRWU32_post killed renamable $r6, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv114116, align 4)
+    renamable $q0 = MVE_VEOR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r5, dead $cpsr = tSUBi8 killed renamable $r5, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r10, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv114116, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r10, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv114116, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r10 = tMOVr $r6, 14, $noreg
     t2LoopEnd renamable $lr, %bb.5, implicit-def dead $cpsr
@@ -958,16 +958,16 @@ body:             |
     successors: %bb.8(0x7c000000), %bb.9(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3, $r5
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv101, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv102104, align 4)
-    renamable $r5, renamable $q2 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv105107, align 4)
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv101, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv102104, align 4)
+    renamable $r5, renamable $q2 = MVE_VLDRWU32_post killed renamable $r5, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv105107, align 4)
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = MVE_VSUBi32 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VSUBi32 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv105107, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv105107, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r5, 14, $noreg
     t2LoopEnd renamable $lr, %bb.8, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/no-vpsel-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/no-vpsel-liveout.mir
index ef2e85bdefec1..7d898653caab0 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/no-vpsel-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/no-vpsel-liveout.mir
@@ -115,19 +115,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_offset 8
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, killed $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, killed $noreg, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x80000000)
@@ -147,7 +147,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -159,15 +159,15 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r3, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -175,7 +175,7 @@ body:             |
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-load.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-load.mir
index 2c363760b0600..e0a483047744f 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-load.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-load.mir
@@ -124,7 +124,7 @@ body:             |
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
   ; CHECK:   renamable $r3 = t2ADDri renamable $r2, 15, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 15, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 16, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -134,20 +134,20 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
-  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, undef $q1
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, $noreg, undef $q1
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv2022, align 1)
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, $noreg :: (load (s128) from %ir.lsr.iv19, align 1)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv2022, align 1)
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, $noreg, $noreg :: (load (s128) from %ir.lsr.iv19, align 1)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VADDi8 killed renamable $q2, renamable $q1, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q2 = MVE_VADDi8 killed renamable $q2, renamable $q1, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP8 killed renamable $r3, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu8no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 killed renamable $r3, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu8no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $sp = t2LDMIA_UPD $sp, 14 /* CC::al */, $noreg, def $r7, def $lr
   ; CHECK:   renamable $r0 = tUXTB killed renamable $r0, 14 /* CC::al */, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit killed $r0
@@ -172,7 +172,7 @@ body:             |
     $r7 = frame-setup tMOVr $sp, 14, $noreg
     frame-setup CFI_INSTRUCTION def_cfa_register $r7
     renamable $r3 = t2ADDri renamable $r2, 15, 14, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 15, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 16, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -185,24 +185,24 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
-    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, undef $q1
+    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
+    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, $noreg, undef $q1
     MVE_VPST 2, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv2022, align 1)
-    renamable $r0, renamable $q2 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, $noreg :: (load (s128) from %ir.lsr.iv19, align 1)
+    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv2022, align 1)
+    renamable $r0, renamable $q2 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, $noreg, $noreg :: (load (s128) from %ir.lsr.iv19, align 1)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14, $noreg
-    renamable $q2 = MVE_VADDi8 killed renamable $q2, renamable $q1, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VADDi8 killed renamable $q2, renamable $q1, 0, $noreg, $noreg, undef renamable $q2
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VADDi8 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi8 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 
   bb.3.middle.block:
     liveins: $q0, $q1, $r3
 
-    renamable $vpr = MVE_VCTP8 killed renamable $r3, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu8no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP8 killed renamable $r3, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu8no_acc killed renamable $q0, 0, $noreg, $noreg
     $sp = t2LDMIA_UPD $sp, 14, $noreg, def $r7, def $lr
     renamable $r0 = tUXTB killed renamable $r0, 14, $noreg
     tBX_RET 14, $noreg, implicit killed $r0

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-store.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-store.mir
index 06e700ba45b3f..3ee066a9ffe2e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-store.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/non-masked-store.mir
@@ -121,13 +121,13 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 16, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv15, align 1)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv1618, align 1)
-  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, $noreg :: (store (s128) into %ir.lsr.iv1921, align 1)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv15, align 1)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1618, align 1)
+  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, $noreg, $noreg :: (store (s128) into %ir.lsr.iv1921, align 1)
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -160,14 +160,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 16, 14, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv15, align 1)
-    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1618, align 1)
+    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv15, align 1)
+    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1618, align 1)
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, $noreg :: (store (s128) into %ir.lsr.iv1921, align 1)
+    renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, $noreg, $noreg :: (store (s128) into %ir.lsr.iv1921, align 1)
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-invariant.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-invariant.mir
index 86634f2cf750c..31b7ee2c4dad1 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-invariant.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-invariant.mir
@@ -85,18 +85,18 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r2
   ; CHECK:   renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.4(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, killed $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK:   tB %bb.4, 14 /* CC::al */, $noreg
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK: bb.4.exit:
   ; CHECK:   liveins: $q1
   ; CHECK:   renamable $r0, renamable $r1 = VMOVRRD renamable $d2, 14 /* CC::al */, $noreg
@@ -122,7 +122,7 @@ body:             |
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r1, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r1 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r3
     $r1 = tMOVr killed $r3, 14 /* CC::al */, $noreg
 
@@ -130,21 +130,21 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.4(0x04000000)
     liveins: $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     $lr = tMOVr $r1, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = nsw tSUBi8 killed $r1, 1, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 1, killed renamable $vpr, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, undef renamable $q1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.4, 14 /* CC::al */, $noreg
 
   bb.3:
     successors: %bb.4(0x80000000)
 
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
 
   bb.4.exit:
     liveins: $q1

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-liveout.mir
index cb3cf0668350c..13ba3594a22bf 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/predicated-liveout.mir
@@ -88,20 +88,20 @@ body:             |
   ; CHECK: bb.1.for.body.preheader:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $lr, $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 0, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.for.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r3
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRBU16_post killed renamable $r1, 8, 0, $noreg :: (load (s64) from %ir.input_2_cast, align 1)
-  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRBU16_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.input_1_cast, align 1)
-  ; CHECK:   renamable $q1 = MVE_VADDi16 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, killed $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRBU16_post killed renamable $r1, 8, 0, $noreg, $noreg :: (load (s64) from %ir.input_2_cast, align 1)
+  ; CHECK:   renamable $r0, renamable $q2 = MVE_VLDRBU16_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.input_1_cast, align 1)
+  ; CHECK:   renamable $q1 = MVE_VADDi16 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi16 killed renamable $q1, killed renamable $q0, 0, killed $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   ; CHECK: bb.4.for.cond.cleanup:
   ; CHECK:   liveins: $lr
@@ -124,29 +124,29 @@ body:             |
     successors: %bb.2(0x80000000)
     liveins: $r0, $r1, $r2, $lr
 
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = tMOVi8 0, 14, $noreg
 
   bb.2.for.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r3, $lr
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed $r3, 1, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     MVE_VPST 1, implicit $vpr
-    renamable $r1, renamable $q1 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.input_2_cast, align 1)
-    renamable $r0, renamable $q2 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.input_1_cast, align 1)
-    renamable $q1 = MVE_VADDi16 killed renamable $q2, killed renamable $q1, 1, renamable $vpr, undef renamable $q1
-    renamable $q0 = MVE_VADDi16 killed renamable $q1, killed renamable $q0, 1, killed renamable $vpr, undef renamable $q0
+    renamable $r1, renamable $q1 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.input_2_cast, align 1)
+    renamable $r0, renamable $q2 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.input_1_cast, align 1)
+    renamable $q1 = MVE_VADDi16 killed renamable $q2, killed renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VADDi16 killed renamable $q1, killed renamable $q0, 1, killed renamable $vpr, $noreg, undef renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r7, def $pc, implicit killed $r0
 
   bb.4.for.cond.cleanup:

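With the operand in place, a simple well-formedness rule falls out of the tests above: the trailing loop predicate operand is always a plain register operand holding either $noreg or lr. A hedged sketch of such a check, not the actual MachineVerifier change, with the operand index and LR register passed in as assumptions:

// Illustrative invariant check, not the verifier's real code: the
// trailing loop-predicate operand must be a register operand and must
// hold either $noreg (not tail predicated) or the loop's LR.
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineOperand.h"

using namespace llvm;

static bool loopPredOperandLooksSane(const MachineInstr &MI,
                                     unsigned LoopPredIdx, Register LR) {
  if (MI.getNumOperands() <= LoopPredIdx)
    return false;
  const MachineOperand &MO = MI.getOperand(LoopPredIdx);
  if (!MO.isReg())
    return false;
  Register R = MO.getReg();
  return !R.isValid() || R == LR;
}
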
diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/reductions-vpt-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/reductions-vpt-liveout.mir
index 80875fca9d738..4ac6c60764e12 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/reductions-vpt-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/reductions-vpt-liveout.mir
@@ -333,19 +333,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 0, $noreg :: (load (s32) from %ir.lsr.iv13, align 1)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 0, $noreg :: (load (s32) from %ir.lsr.iv1416, align 1)
-  ; CHECK:   renamable $q1 = nuw nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 0, $noreg, $noreg :: (load (s32) from %ir.lsr.iv13, align 1)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 0, $noreg, $noreg :: (load (s32) from %ir.lsr.iv1416, align 1)
+  ; CHECK:   renamable $q1 = nuw nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -371,29 +371,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s32) from %ir.lsr.iv13, align 1)
-    renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s32) from %ir.lsr.iv1416, align 1)
+    renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s32) from %ir.lsr.iv13, align 1)
+    renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s32) from %ir.lsr.iv1416, align 1)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = nuw nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nuw nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...
@@ -439,19 +439,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 0, $noreg :: (load (s32) from %ir.lsr.iv14, align 1)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 0, $noreg :: (load (s32) from %ir.lsr.iv1517, align 1)
-  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 0, $noreg, $noreg :: (load (s32) from %ir.lsr.iv14, align 1)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 0, $noreg, $noreg :: (load (s32) from %ir.lsr.iv1517, align 1)
+  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -477,29 +477,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s32) from %ir.lsr.iv14, align 1)
-    renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s32) from %ir.lsr.iv1517, align 1)
+    renamable $r0, renamable $q1 = MVE_VLDRBU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s32) from %ir.lsr.iv14, align 1)
+    renamable $r1, renamable $q2 = MVE_VLDRBU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s32) from %ir.lsr.iv1517, align 1)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...
@@ -546,19 +546,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv13, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv1416, align 2)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv13, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv1416, align 2)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -584,29 +584,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv13, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1416, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv13, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1416, align 2)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...
@@ -652,19 +652,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv14, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg :: (load (s64) from %ir.lsr.iv1517, align 2)
-  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv14, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 0, $noreg, $noreg :: (load (s64) from %ir.lsr.iv1517, align 2)
+  ; CHECK:   renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -690,29 +690,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv14, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv1517, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv14, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1517, align 2)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 renamable $q0, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...
@@ -758,19 +758,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv12, align 4)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1315, align 4)
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv12, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1315, align 4)
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -796,29 +796,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv12, align 4)
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1315, align 4)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv12, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1315, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...
@@ -864,19 +864,19 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   dead $r7 = frame-setup tMOVr $sp, 14 /* CC::al */, $noreg
   ; CHECK:   frame-setup CFI_INSTRUCTION def_cfa_register $r7
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 0, killed $noreg, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x50000000)
@@ -902,29 +902,29 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     $lr = t2DoLoopStart renamable $lr
 
   bb.2.vector.body (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
   bb.3.middle.block:
     liveins: $q0
 
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/remove-elem-moves.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/remove-elem-moves.mir
index bf94eabeb7511..ebf57e704fa27 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/remove-elem-moves.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/remove-elem-moves.mir
@@ -183,10 +183,10 @@ body:             |
   ; CHECK: bb.4.vector.body:
   ; CHECK:   successors: %bb.4(0x7c000000), %bb.5(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r4, $r5, $r7, $r12
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_pre killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.scevgep18, align 4)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_pre killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep18, align 4)
   ; CHECK:   $lr = tMOVr killed $r5, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VABSf32 killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r1 = MVE_VSTRBU8_pre killed renamable $q0, killed renamable $r1, 16, 0, $noreg :: (store (s128) into %ir.scevgep13, align 4)
+  ; CHECK:   renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VABSf32 killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1 = MVE_VSTRBU8_pre killed renamable $q0, killed renamable $r1, 16, 0, $noreg, $noreg :: (store (s128) into %ir.scevgep13, align 4)
   ; CHECK:   renamable $lr = t2SUBri killed renamable $lr, 1, 14 /* CC::al */, $noreg, def $cpsr
   ; CHECK:   $r5 = tMOVr killed $lr, 14 /* CC::al */, $noreg
   ; CHECK:   tBcc %bb.4, 1 /* CC::ne */, killed $cpsr
@@ -272,10 +272,10 @@ body:             |
     successors: %bb.4(0x7c000000), %bb.5(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r4, $r5, $r7, $r12
 
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_pre killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.scevgep18, align 4)
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_pre killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep18, align 4)
     $lr = tMOVr killed $r5, 14, $noreg
-    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VABSf32 killed renamable $q0, 0, $noreg, undef renamable $q0
-    renamable $r1 = MVE_VSTRBU8_pre killed renamable $q0, killed renamable $r1, 16, 0, $noreg :: (store (s128) into %ir.scevgep13, align 4)
+    renamable $q0 = nnan ninf nsz arcp contract afn reassoc MVE_VABSf32 killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $r1 = MVE_VSTRBU8_pre killed renamable $q0, killed renamable $r1, 16, 0, $noreg, $noreg :: (store (s128) into %ir.scevgep13, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r5 = tMOVr $lr, 14, $noreg
     t2LoopEnd killed renamable $lr, %bb.4, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/safe-retaining.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/safe-retaining.mir
index acd867b18aedf..f16e4b8bb8027 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/safe-retaining.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/safe-retaining.mir
@@ -136,10 +136,10 @@ body:             |
   ; CHECK:   liveins: $r0, $r1, $r2, $r12
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.addr.b, align 4)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg :: (load (s128) from %ir.addr.a, align 4)
-  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 0, killed $noreg :: (store (s128) into %ir.addr.c, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.addr.b, align 4)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg :: (load (s128) from %ir.addr.a, align 4)
+  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.addr.c, align 4)
   ; CHECK:   dead $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -167,17 +167,17 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
     renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
+    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -236,11 +236,11 @@ body:             |
   ; CHECK:   liveins: $r0, $r1, $r2, $r4
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.addr.b, align 2)
-  ; CHECK:   renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 0, $noreg :: (load (s128) from %ir.addr.a, align 2)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.addr.b, align 2)
+  ; CHECK:   renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 0, $noreg, $noreg :: (load (s128) from %ir.addr.a, align 2)
   ; CHECK:   $r0 = tMOVr $r1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg
-  ; CHECK:   renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 0, killed $noreg :: (store (s128) into %ir.addr.c, align 2)
+  ; CHECK:   renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg, $noreg
+  ; CHECK:   renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.addr.c, align 2)
   ; CHECK:   dead $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -268,18 +268,18 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r4
 
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
     renamable $r4, dead $cpsr = tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 2)
-    renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 2)
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 2)
+    renamable $q1 = MVE_VLDRHU16 killed renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 2)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r1, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg
+    renamable $q1 = MVE_VQSHRUNs16th killed renamable $q1, killed renamable $q0, 1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 2)
+    renamable $r2 = MVE_VSTRHU16_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 2)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 

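The next file, skip-debug.mir, checks that DBG_VALUEs do not change any of these decisions: the output must be identical with and without debug info. The stock way to get that property when scanning a loop body is to iterate with instructionsWithoutDebug; a minimal sketch, where the counting is only illustrative and a real pass would inspect each instruction's operands:

// Illustrative only: skip debug instructions while walking a block so
// that builds with and without debug info make the same decisions.
#include "llvm/CodeGen/MachineBasicBlock.h"

using namespace llvm;

static unsigned countLoopInstrs(const MachineBasicBlock &MBB) {
  unsigned N = 0;
  for (const MachineInstr &MI :
       instructionsWithoutDebug(MBB.begin(), MBB.end()))
    if (!MI.isTerminator()) // a real pass would look at MI's operands here
      ++N;
  return N;
}
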
diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-debug.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-debug.mir
index b448afb9c77e9..5966df967bb64 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-debug.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-debug.mir
@@ -195,7 +195,7 @@ body:             |
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg, debug-location !28
   ; CHECK:   renamable $r4, dead $cpsr = tMOVi8 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg, debug-location !28
-  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r4, 0, $noreg, undef renamable $q0, debug-location !28
+  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r4, 0, $noreg, $noreg, undef renamable $q0, debug-location !28
   ; CHECK:   renamable $q0 = MVE_VMOV_to_lane_32 killed renamable $q0, killed renamable $r12, 0, 14 /* CC::al */, $noreg, debug-location !28
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $lr, renamable $r3, 19, 14 /* CC::al */, $noreg, $noreg, debug-location !28
   ; CHECK:   renamable $r3, dead $cpsr = tLSRri killed renamable $r3, 2, 14 /* CC::al */, $noreg, debug-location !28
@@ -203,21 +203,21 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, debug-location !30
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg, debug-location !30
   ; CHECK:   DBG_VALUE $vpr, $noreg, !17, !DIExpression(), debug-location !30
-  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, undef $q1
+  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, $noreg, undef $q1
   ; CHECK:   MVE_VPST 8, implicit $vpr, debug-location !30
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, killed renamable $vpr, debug-location !30 :: (load (s64) from %ir.lsr.iv14, align 2)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg, debug-location !30 :: (load (s64) from %ir.lsr.iv14, align 2)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg, debug-location !30
-  ; CHECK:   renamable $q0 = MVE_VMOVLs16bh killed renamable $q0, 0, $noreg, undef renamable $q0, debug-location !30
-  ; CHECK:   renamable $q0 = MVE_VSUBi32 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0, debug-location !32
+  ; CHECK:   renamable $q0 = MVE_VMOVLs16bh killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0, debug-location !30
+  ; CHECK:   renamable $q0 = MVE_VSUBi32 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0, debug-location !32
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2, debug-location !29
   ; CHECK: bb.3.middle.block:
   ; CHECK:   successors: %bb.4(0x80000000)
   ; CHECK:   liveins: $q0, $q1, $r0, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, debug-location !30
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, debug-location !32
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, debug-location !28
+  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, $noreg, debug-location !30
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg, debug-location !32
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg, debug-location !28
   ; CHECK: bb.4.for.cond.cleanup:
   ; CHECK:   liveins: $r0, $r12
   ; CHECK:   DBG_VALUE $r12, $noreg, !20, !DIExpression(), debug-location !23
@@ -255,7 +255,7 @@ body:             |
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg, debug-location !32
     renamable $r4, dead $cpsr = tMOVi8 0, 14, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg, debug-location !32
-    renamable $q0 = MVE_VDUP32 killed renamable $r4, 0, $noreg, undef renamable $q0, debug-location !32
+    renamable $q0 = MVE_VDUP32 killed renamable $r4, 0, $noreg, $noreg, undef renamable $q0, debug-location !32
     renamable $q0 = MVE_VMOV_to_lane_32 killed renamable $q0, killed renamable $r12, 0, 14, $noreg, debug-location !32
     renamable $lr = nuw nsw t2ADDrs killed renamable $lr, renamable $r3, 19, 14, $noreg, $noreg, debug-location !32
     renamable $r3, dead $cpsr = tLSRri killed renamable $r3, 2, 14, $noreg, debug-location !32
@@ -266,15 +266,15 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, debug-location !34
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg, debug-location !34
     DBG_VALUE $vpr, $noreg, !17, !DIExpression(), debug-location !34
-    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, undef $q1
+    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, $noreg, undef $q1
     MVE_VPST 8, implicit $vpr, debug-location !34
-    renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, killed renamable $vpr, debug-location !34 :: (load (s64) from %ir.lsr.iv14, align 2)
+    renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg, debug-location !34 :: (load (s64) from %ir.lsr.iv14, align 2)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg, debug-location !34
-    renamable $q0 = MVE_VMOVLs16bh killed renamable $q0, 0, $noreg, undef renamable $q0, debug-location !34
+    renamable $q0 = MVE_VMOVLs16bh killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0, debug-location !34
     renamable $lr = t2LoopDec killed renamable $lr, 1, debug-location !33
-    renamable $q0 = MVE_VSUBi32 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0, debug-location !38
+    renamable $q0 = MVE_VSUBi32 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0, debug-location !38
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr, debug-location !33
     tB %bb.3, 14, $noreg, debug-location !33
 
@@ -282,9 +282,9 @@ body:             |
     successors: %bb.4(0x80000000)
     liveins: $q0, $q1, $r0, $r3
 
-    renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, debug-location !34
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, debug-location !38
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, debug-location !32
+    renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, $noreg, debug-location !34
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg, debug-location !38
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg, debug-location !32
 
   bb.4.for.cond.cleanup:
     liveins: $r0, $r12

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-vpt-debug.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-vpt-debug.mir
index a395a28992a1e..6d0688432a5a0 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-vpt-debug.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/skip-vpt-debug.mir
@@ -207,17 +207,17 @@ body:             |
   ; CHECK:   DBG_VALUE $r2, $noreg, !26, !DIExpression(), debug-location !29
   ; CHECK:   DBG_VALUE $r1, $noreg, !25, !DIExpression(), debug-location !29
   ; CHECK:   DBG_VALUE $r0, $noreg, !24, !DIExpression(), debug-location !29
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1152, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1152, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1, debug-location !31
   ; CHECK: bb.2.vector.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r2
   ; CHECK:   DBG_VALUE float 0x3810000000000000, $noreg, !27, !DIExpression(), debug-location !29
   ; CHECK:   DBG_VALUE $r2, $noreg, !26, !DIExpression(), debug-location !29
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, debug-location !32 :: (load (s128) from %ir.lsr.iv12, align 4, !tbaa !34)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg, debug-location !32 :: (load (s128) from %ir.lsr.iv12, align 4, !tbaa !34)
   ; CHECK:   DBG_VALUE $r0, $noreg, !24, !DIExpression(DW_OP_LLVM_entry_value, 1), debug-location !29
   ; CHECK:   MVE_VPTv4f32 8, renamable $q1, renamable $q0, 12, implicit-def $vpr, debug-location !40
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, killed renamable $q0, debug-location !40
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0, debug-location !40
   ; CHECK:   DBG_VALUE $r1, $noreg, !25, !DIExpression(DW_OP_LLVM_entry_value, 1), debug-location !29
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
@@ -275,7 +275,7 @@ body:             |
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg, debug-location !31
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r3 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg, debug-location !31
-    renamable $q0 = MVE_VMOVimmi32 1152, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 1152, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2DoLoopStartTP killed renamable $r3, renamable $r1, debug-location !31
 
   bb.2.vector.body (align 4):
@@ -284,12 +284,12 @@ body:             |
 
     DBG_VALUE float 0x3810000000000000, $noreg, !27, !DIExpression(), debug-location !29
     DBG_VALUE $r2, $noreg, !26, !DIExpression(), debug-location !29
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr, debug-location !32
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, debug-location !32 :: (load (s128) from %ir.lsr.iv12, align 4, !tbaa !34)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg, debug-location !32 :: (load (s128) from %ir.lsr.iv12, align 4, !tbaa !34)
     DBG_VALUE $r0, $noreg, !24, !DIExpression(DW_OP_LLVM_entry_value, 1), debug-location !29
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, debug-location !40
-    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, killed renamable $q0, debug-location !40
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg, debug-location !40
+    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0, debug-location !40
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     DBG_VALUE $r1, $noreg, !25, !DIExpression(DW_OP_LLVM_entry_value, 1), debug-location !29
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.2, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/subreg-liveness.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/subreg-liveness.mir
index 8ab1ac25b1cd9..3b142e7ba2d41 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/subreg-liveness.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/subreg-liveness.mir
@@ -88,21 +88,21 @@ body:             |
   ; CHECK:   renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r0 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r0, 19, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = t2DLS killed renamable $r0
   ; CHECK: bb.2.while.body (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.4(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr :: (load (s128) from %ir.y.addr.0161, align 4)
-  ; CHECK:   renamable $q0 = MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr, $lr :: (load (s128) from %ir.y.addr.0161, align 4)
+  ; CHECK:   renamable $q0 = MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $lr, undef renamable $q0
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK:   tB %bb.4, 14 /* CC::al */, $noreg
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK: bb.4.while.end:
   ; CHECK:   liveins: $d0
   ; CHECK:   renamable $r0, renamable $r1 = VMOVRRD killed renamable $d0, 14 /* CC::al */, $noreg
@@ -131,17 +131,17 @@ body:             |
     renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 3, 14 /* CC::al */, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r0 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r0, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2DoLoopStartTP killed renamable $r0, renamable $r2
 
   bb.2.while.body (align 4):
     successors: %bb.2(0x7c000000), %bb.4(0x04000000)
     liveins: $lr, $q0:0x000000000000003C, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr :: (load (s128) from %ir.y.addr.0161, align 4)
-    renamable $q0 = MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr, $lr :: (load (s128) from %ir.y.addr.0161, align 4)
+    renamable $q0 = MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $lr, undef renamable $q0
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.4, 14 /* CC::al */, $noreg
@@ -149,7 +149,7 @@ body:             |
   bb.3:
     successors: %bb.4(0x80000000)
 
-    renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q0
 
   bb.4.while.end:
     liveins: $q0:0x000000000000000C

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unpredicated-max.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unpredicated-max.mir
index 5fd899ebfd1fc..57cfaa8813734 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unpredicated-max.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unpredicated-max.mir
@@ -98,10 +98,10 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r5, $r12
   ; CHECK:   $r3 = tMOVr $r12, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r3 = MVE_VMAXVs16 killed renamable $r3, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r3 = MVE_VMAXVs16 killed renamable $r3, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $lr = tMOVr $r5, 14 /* CC::al */, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STRH_POST killed renamable $r3, killed renamable $r1, 2, 14 /* CC::al */, $noreg :: (store (s16) into %ir.lsr.iv.2)
   ; CHECK:   renamable $r5, dead $cpsr = nsw tSUBi8 killed $r5, 1, 14 /* CC::al */, $noreg
@@ -140,10 +140,10 @@ body:             |
     liveins: $r0, $r1, $r2, $r5, $r12
 
     $r3 = tMOVr $r12, 14 /* CC::al */, $noreg
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-    renamable $r3 = MVE_VMAXVs16 killed renamable $r3, killed renamable $q0, 0, $noreg
+    renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+    renamable $r3 = MVE_VMAXVs16 killed renamable $r3, killed renamable $q0, 0, $noreg, $noreg
     $lr = tMOVr $r5, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STRH_POST killed renamable $r3, killed renamable $r1, 2, 14 /* CC::al */, $noreg :: (store (s16) into %ir.lsr.iv.2)
     renamable $r5, dead $cpsr = nsw tSUBi8 killed $r5, 1, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unrolled-and-vector.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unrolled-and-vector.mir
index d4e1913c0af1a..7e2eda863d5da 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unrolled-and-vector.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unrolled-and-vector.mir
@@ -288,10 +288,10 @@ body:             |
   ; CHECK: bb.5.vector.body:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.11(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv46, align 1)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv4749, align 1)
-  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv5052, align 1)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv46, align 1)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv4749, align 1)
+  ; CHECK:   renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv5052, align 1)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.5
   ; CHECK:   tB %bb.11, 14 /* CC::al */, $noreg
   ; CHECK: bb.6.for.body.preheader.new:
@@ -434,15 +434,15 @@ body:             |
     successors: %bb.5(0x7c000000), %bb.11(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 16, 14, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv46, align 1)
-    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv4749, align 1)
+    renamable $r1, renamable $q0 = MVE_VLDRBU8_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv46, align 1)
+    renamable $r2, renamable $q1 = MVE_VLDRBU8_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv4749, align 1)
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv5052, align 1)
+    renamable $r0 = MVE_VSTRBU8_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv5052, align 1)
     t2LoopEnd renamable $lr, %bb.5, implicit-def dead $cpsr
     tB %bb.11, 14, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unsafe-retaining.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unsafe-retaining.mir
index 7e60d32dc8b8c..0e66773b77ddc 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unsafe-retaining.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/unsafe-retaining.mir
@@ -132,17 +132,17 @@ body:             |
   ; CHECK: bb.2.loop.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
-  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
+  ; CHECK:   renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
   ; CHECK:   renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMVN killed renamable $q1, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VQSHRNbhs32 killed renamable $q0, killed renamable $q1, 15, 0, $noreg
+  ; CHECK:   renamable $q1 = MVE_VMVN killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VQSHRNbhs32 killed renamable $q0, killed renamable $q1, 15, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -170,18 +170,18 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
-    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
+    renamable $r0, renamable $q0 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
+    renamable $r1, renamable $q1 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
     renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-    renamable $q1 = MVE_VMVN killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMVN killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VQSHRNbhs32 killed renamable $q0, killed renamable $q1, 15, 0, $noreg
+    renamable $q0 = MVE_VQSHRNbhs32 killed renamable $q0, killed renamable $q1, 15, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+    renamable $r2 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 
@@ -238,17 +238,17 @@ body:             |
   ; CHECK: bb.2.loop.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VORN renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
+  ; CHECK:   renamable $q0 = MVE_VORN renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+  ; CHECK:   renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -276,18 +276,18 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.b, align 4)
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr :: (load (s128) from %ir.addr.a, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.b, align 4)
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.addr.a, align 4)
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
     renamable $r12 = t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VORN renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VORN renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg
+    renamable $q1 = MVE_VQSHRUNs32th killed renamable $q1, killed renamable $q0, 3, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr :: (store (s128) into %ir.addr.c, align 4)
+    renamable $r2 = MVE_VSTRWU32_post killed renamable $q1, killed renamable $r2, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.addr.c, align 4)
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vaddv.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vaddv.mir
index 4ccc56c130729..2d1c743d1025c 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vaddv.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vaddv.mir
@@ -860,8 +860,8 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, killed $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, killed $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
@@ -894,10 +894,10 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
@@ -959,8 +959,8 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, killed $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r12 = MVE_VADDVs16no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, killed $noreg, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r12 = MVE_VADDVs16no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
@@ -993,10 +993,10 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-    renamable $r12 = MVE_VADDVs16no_acc killed renamable $q0, 0, $noreg
+    renamable $r0, renamable $q0 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+    renamable $r12 = MVE_VADDVs16no_acc killed renamable $q0, 0, $noreg, $noreg
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
@@ -1058,8 +1058,8 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU8_post killed renamable $r0, 16, 0, killed $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $r12 = MVE_VADDVs8no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU8_post killed renamable $r0, 16, 0, killed $noreg, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $r12 = MVE_VADDVs8no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
@@ -1092,10 +1092,10 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-    renamable $r12 = MVE_VADDVs8no_acc killed renamable $q0, 0, $noreg
+    renamable $r0, renamable $q0 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+    renamable $r12 = MVE_VADDVs8no_acc killed renamable $q0, 0, $noreg, $noreg
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
@@ -1155,8 +1155,8 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r2
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, killed $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 0, killed $noreg, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r2
@@ -1193,13 +1193,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -1271,12 +1271,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
@@ -1311,12 +1311,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
@@ -1381,14 +1381,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r2
@@ -1425,14 +1425,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r0, renamable $q0 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -1504,12 +1504,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
@@ -1544,12 +1544,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     early-clobber renamable $r1 = t2STR_POST killed renamable $r12, killed renamable $r1, 4, 14 /* CC::al */, $noreg :: (store (s32) into %ir.store.addr)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
@@ -1614,14 +1614,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.exit:
   ; CHECK:   liveins: $r2
@@ -1658,14 +1658,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r0, renamable $q0 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
     $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMVN killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -1736,18 +1736,18 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r12
   ; CHECK:   $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2SXTH killed renamable $r12, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
@@ -1778,7 +1778,7 @@ body:             |
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r12
     $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
 
@@ -1786,12 +1786,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r4
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2SXTH killed renamable $r12, 0, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
@@ -1864,12 +1864,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r3, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2SXTAH killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 8, 14 /* CC::al */, $noreg
@@ -1911,12 +1911,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r3, $r4
 
-    renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBS16_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+    renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2SXTAH killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 8, 14 /* CC::al */, $noreg
@@ -1990,18 +1990,18 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r12
   ; CHECK:   $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2UXTH killed renamable $r12, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
@@ -2031,7 +2031,7 @@ body:             |
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r12
     $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
 
@@ -2039,12 +2039,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r4
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2UXTH killed renamable $r12, 0, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
@@ -2117,12 +2117,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r3, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2UXTAH killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 8, 14 /* CC::al */, $noreg
@@ -2164,12 +2164,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r3, $r4
 
-    renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 2)
-    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 2)
+    renamable $q1 = MVE_VSUBi16 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg
+    renamable $r2 = MVE_VADDVu16no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2UXTAH killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 8, 14 /* CC::al */, $noreg
@@ -2243,18 +2243,18 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r12
   ; CHECK:   $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2SXTB killed renamable $r12, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
@@ -2284,7 +2284,7 @@ body:             |
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r12
     $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
 
@@ -2292,12 +2292,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r4
 
-    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+    renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2SXTB killed renamable $r12, 0, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
@@ -2370,12 +2370,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r3, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2SXTAB killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
@@ -2417,12 +2417,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r3, $r4
 
-    renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+    renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2SXTAB killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
@@ -2496,18 +2496,18 @@ body:             |
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
   ; CHECK:   dead $lr = t2DLS renamable $r12
   ; CHECK:   $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r2, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2UXTB killed renamable $r12, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
@@ -2537,7 +2537,7 @@ body:             |
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $r12 = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
     renamable $r3 = tADDrSPi $sp, 2, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from %fixed-stack.0, align 8)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from %fixed-stack.0, align 8)
     $lr = t2DoLoopStart renamable $r12
     $r4 = tMOVr killed $r12, 14 /* CC::al */, $noreg
 
@@ -2545,12 +2545,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r2, $r4
 
-    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+    renamable $r12 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2UXTB killed renamable $r12, 0, 14 /* CC::al */, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
@@ -2623,12 +2623,12 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q0, $r0, $r1, $r3, $r4
-  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+  ; CHECK:   renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
   ; CHECK:   renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2UXTAB killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
@@ -2670,12 +2670,12 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q0, $r0, $r1, $r3, $r4
 
-    renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv17, align 1)
-    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $r0, renamable $q1 = MVE_VLDRBU8_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv17, align 1)
+    renamable $q1 = MVE_VEOR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     $lr = tMOVr $r4, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg
+    renamable $r2 = MVE_VADDVu8no_acc killed renamable $q1, 0, $noreg, $noreg
     renamable $r4, dead $cpsr = nsw tSUBi8 killed $r4, 1, 14 /* CC::al */, $noreg
     renamable $r3 = t2UXTAB killed renamable $r3, killed renamable $r2, 0, 14 /* CC::al */, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
@@ -2744,10 +2744,10 @@ body:             |
   ; CHECK: bb.2.while.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r12
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 0, $noreg :: (load (s64) from %ir.tmp3, align 2)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU32_post killed renamable $r0, 8, 0, killed $noreg :: (load (s64) from %ir.tmp1, align 2)
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r12 = MVE_VADDVu32acc killed renamable $r12, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 0, $noreg, $noreg :: (load (s64) from %ir.tmp3, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU32_post killed renamable $r0, 8, 0, killed $noreg, $noreg :: (load (s64) from %ir.tmp1, align 2)
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r12 = MVE_VADDVu32acc killed renamable $r12, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.while.end:
   ; CHECK:   liveins: $r12
@@ -2787,13 +2787,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, renamable $vpr :: (load (s64) from %ir.tmp3, align 2)
-    renamable $r0, renamable $q1 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr :: (load (s64) from %ir.tmp1, align 2)
-    renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRHU32_post killed renamable $r1, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.tmp3, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHU32_post killed renamable $r0, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.tmp1, align 2)
+    renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r2, dead $cpsr = nsw tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu32acc killed renamable $r12, killed renamable $q0, 0, $noreg
+    renamable $r12 = MVE_VADDVu32acc killed renamable $r12, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -2859,10 +2859,10 @@ body:             |
   ; CHECK: bb.2.while.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r3
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.tmp3, align 2)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, killed $noreg :: (load (s128) from %ir.tmp1, align 2)
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.tmp3, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, killed $noreg, $noreg :: (load (s128) from %ir.tmp1, align 2)
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r12 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   renamable $r3 = t2UXTAH killed renamable $r3, killed renamable $r12, 0, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.while.end:
@@ -2903,13 +2903,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.tmp3, align 2)
-    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.tmp1, align 2)
-    renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.tmp3, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.tmp1, align 2)
+    renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r2, dead $cpsr = nsw tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
-    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+    renamable $r12 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     renamable $r3 = t2UXTAH killed renamable $r3, killed renamable $r12, 0, 14 /* CC::al */, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -2983,15 +2983,15 @@ body:             |
   ; CHECK: bb.2.while.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.tmp3, align 2)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.tmp1, align 2)
-  ; CHECK:   renamable $q2 = MVE_VMULLBs16 renamable $q1, renamable $q0, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q2, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.tmp3, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.tmp1, align 2)
+  ; CHECK:   renamable $q2 = MVE_VMULLBs16 renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.while.end:
   ; CHECK:   liveins: $r2
@@ -3031,15 +3031,15 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.tmp3, align 2)
-    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.tmp1, align 2)
-    renamable $q2 = MVE_VMULLBs16 renamable $q1, renamable $q0, 0, $noreg, undef renamable $q2
-    renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q2, 0, $noreg, undef renamable $q0
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.tmp3, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.tmp1, align 2)
+    renamable $q2 = MVE_VMULLBs16 renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -3108,15 +3108,15 @@ body:             |
   ; CHECK: bb.2.while.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $r0, $r1, $r2, $r3, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.tmp3, align 2)
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.tmp1, align 2)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.tmp3, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.tmp1, align 2)
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r12 = nsw t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.while.end:
   ; CHECK:   liveins: $r2
@@ -3157,15 +3157,15 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.tmp3, align 2)
-    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr :: (load (s128) from %ir.tmp1, align 2)
+    renamable $r1, renamable $q0 = MVE_VLDRHU16_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.tmp3, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.tmp1, align 2)
     $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMULLTs16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r12 = nsw t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = nsw tSUBi8 killed renamable $r3, 8, 14 /* CC::al */, $noreg
-    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg
+    renamable $r2 = MVE_VADDVu32acc killed renamable $r2, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vcmp-vpst-combination-across-blocks.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vcmp-vpst-combination-across-blocks.mir
index 159a89497c6a3..4a30fa2251d2a 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vcmp-vpst-combination-across-blocks.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vcmp-vpst-combination-across-blocks.mir
@@ -70,15 +70,15 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, $noreg, killed renamable $q0
   ; CHECK:   MVE_VPTv4f32 8, renamable $q1, renamable $q0, 12, implicit-def $vpr
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, killed renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -89,7 +89,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -101,15 +101,15 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, renamable $q0
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr
+    renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, renamable $q0
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 
@@ -158,17 +158,17 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $q2, $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q2, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
-  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, killed renamable $q2
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, $noreg, killed renamable $q2
   ; CHECK:   MVE_VPTv4f32 8, renamable $q1, renamable $q0, 12, implicit-def $vpr
-  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, killed renamable $q1
+  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, $noreg, killed renamable $q1
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -179,7 +179,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -191,16 +191,16 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $q2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
-    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
+    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
+    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, renamable $q1
+    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, renamable $q1
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 
@@ -249,16 +249,16 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $q2, $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q2, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
   ; CHECK:   MVE_VPTv4f32 8, renamable $q1, renamable $q0, 12, implicit-def $vpr
-  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, killed renamable $vpr, killed renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q2
   ; CHECK:   MVE_VPTv4f32 8, renamable $q2, renamable $q1, 12, implicit-def $vpr
-  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, killed renamable $q1
+  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, $noreg, killed renamable $q1
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -269,7 +269,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -281,16 +281,16 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $q2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr
-    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
-    renamable $vpr = MVE_VCMPf32 renamable $q2, renamable $q1, 12, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg
+    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
+    renamable $vpr = MVE_VCMPf32 renamable $q2, renamable $q1, 12, 1, killed renamable $vpr, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, renamable $q1
+    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, renamable $q1
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 
@@ -339,15 +339,15 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $q2, $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q2, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
-  ; CHECK:   renamable $q2 = MVE_VORR killed renamable $q2, renamable $q1, 0, $noreg, killed renamable $q2
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q2 = MVE_VORR killed renamable $q2, renamable $q1, 0, $noreg, $noreg, killed renamable $q2
   ; CHECK:   MVE_VPTv4f32 8, renamable $q0, killed renamable $q1, 12, implicit-def $vpr
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q0, 1, killed renamable $vpr, killed renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q0, 1, killed renamable $vpr, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -358,7 +358,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -370,14 +370,14 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $q2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
-    renamable $vpr = MVE_VCMPf32 renamable $q0, renamable $q1, 12, 1, killed renamable $vpr
-    renamable $q2 = MVE_VORR renamable $q2, killed renamable $q1, 0, $noreg, killed renamable $q2
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
+    renamable $vpr = MVE_VCMPf32 renamable $q0, renamable $q1, 12, 1, killed renamable $vpr, $noreg
+    renamable $q2 = MVE_VORR renamable $q2, killed renamable $q1, 0, $noreg, $noreg, killed renamable $q2
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VORR renamable $q0, renamable $q0, 1, renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VORR renamable $q0, renamable $q0, 1, renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 
@@ -426,17 +426,17 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
-  ; CHECK:   renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 0, $noreg, killed renamable $q1
-  ; CHECK:   renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 0, killed $noreg
-  ; CHECK:   renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, killed renamable $q0
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 0, $noreg, $noreg, killed renamable $q1
+  ; CHECK:   renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 0, killed $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, $noreg, killed renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, killed renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q1, killed renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -447,7 +447,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -459,16 +459,16 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, renamable $q1
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr
-    renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, killed renamable $q0
+    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, renamable $q1
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VORR renamable $q1, renamable $q1, 0, $noreg, $noreg, killed renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = MVE_VORR killed renamable $q1, renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q0
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 
@@ -517,16 +517,16 @@ body:             |
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $q2, $r0, $r1
-  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VDUP32 renamable $r1, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.1 (align 4):
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q2, $r0
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 0, $noreg, $noreg
   ; CHECK:   MVE_VPTv4f32 8, renamable $q1, renamable $q0, 12, implicit-def $vpr
-  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, killed renamable $q1
+  ; CHECK:   dead renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, killed renamable $vpr, $noreg, killed renamable $q1
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
   ; CHECK: bb.2:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -537,7 +537,7 @@ body:             |
     liveins: $r0, $r1, $r2
 
     renamable $r3, dead $cpsr = nuw nsw tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VDUP32 killed renamable $r1, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2ANDri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $lr = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -549,15 +549,15 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $q2
 
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr
+    renamable $r0, renamable $q1 = MVE_VLDRWU32_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr
-    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, killed renamable $q2
+    renamable $vpr = MVE_VCMPf32 renamable $q1, renamable $q0, 12, 1, killed renamable $vpr, $noreg
+    renamable $q2 = MVE_VORR renamable $q1, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, renamable $q1
+    renamable $q1 = MVE_VORR killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, renamable $q1
     t2LoopEnd renamable $lr, %bb.6, implicit-def dead $cpsr
     tB %bb.8, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-add-operand-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-add-operand-liveout.mir
index 86481962f3ce2..32ea68ab3312a 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-add-operand-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-add-operand-liveout.mir
@@ -123,7 +123,7 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -133,23 +133,23 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q1, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r2
   ; CHECK:   renamable $r0, dead $cpsr = tADDi3 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r0, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x80000000)
@@ -169,7 +169,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -181,16 +181,16 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q1, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, undef $q0
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, $noreg, undef $q0
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r3, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -199,9 +199,9 @@ body:             |
     liveins: $q0, $q1, $r2
 
     renamable $r0, dead $cpsr = tADDi3 killed renamable $r2, 4, 14, $noreg
-    renamable $vpr = MVE_VCTP32 killed renamable $r0, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP32 killed renamable $r0, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt-2.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt-2.mir
index 989a58e8a1162..1fb505bbfc7c7 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt-2.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt-2.mir
@@ -127,12 +127,12 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r3
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
   ; CHECK:   MVE_VPTv4i32r 8, renamable $q0, $zr, 1, implicit-def $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -171,16 +171,16 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 4, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $vpr = MVE_VCMPi32r renamable $q0, $zr, 1, 1, killed renamable $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $vpr = MVE_VCMPi32r renamable $q0, $zr, 1, 1, killed renamable $vpr, $noreg
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt.mir
index 5be38d4f740c5..0615fce40b668 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-in-vpt.mir
@@ -158,11 +158,11 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r3
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -201,13 +201,13 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -296,14 +296,14 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
   ; CHECK:   VSTR_P0_off renamable $vpr, $sp, 0, 14 /* CC::al */, $noreg :: (store (s32) into %stack.0)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -342,14 +342,14 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
     VSTR_P0_off renamable $vpr, $sp, 0, 14, $noreg :: (store (s32) into %stack.0)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -438,14 +438,14 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+  ; CHECK:   renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, killed renamable $vpr, $noreg
   ; CHECK:   $vpr = VMSR_P0 $r3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -484,14 +484,14 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
     $vpr = VMSR_P0 $r3, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -580,14 +580,14 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-  ; CHECK:   dead renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+  ; CHECK:   dead renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
   ; CHECK:   $r3 = VMRS_P0 $vpr, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   $r0 = tMOVr $r3, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.bb27:
@@ -626,14 +626,14 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 2, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr
-    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $r3, renamable $q1 = MVE_VLDRWU32_post killed renamable $r3, 16, 1, renamable $vpr, $noreg
     $r3 = VMRS_P0 $vpr, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VMULi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     $r0 = tMOVr $r3, 14, $noreg
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subi3.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subi3.mir
index a70c6f13239f1..a9f4d7c1f8126 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subi3.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subi3.mir
@@ -115,10 +115,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -149,14 +149,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3, dead $cpsr = tSUBi3 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri.mir
index 639dc610b75bb..d995f11b6c0e1 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri.mir
@@ -114,10 +114,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -148,14 +148,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri12.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri12.mir
index 9d6ea1cebd9b3..48e161ded90fd 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri12.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp-subri12.mir
@@ -114,10 +114,10 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
-  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
-  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
+  ; CHECK:   renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+  ; CHECK:   renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
+  ; CHECK:   renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -148,14 +148,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv13, align 4)
-    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1416, align 4)
+    renamable $r1, renamable $q0 = MVE_VLDRWU32_post killed renamable $r1, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv13, align 4)
+    renamable $r2, renamable $q1 = MVE_VLDRWU32_post killed renamable $r2, 16, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1416, align 4)
     renamable $r3 = t2SUBri12 killed renamable $r3, 4, 14, $noreg
-    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nsw MVE_VADDi32 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1719, align 4)
+    renamable $r0 = MVE_VSTRWU32_post killed renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1719, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp16-reduce.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp16-reduce.mir
index 11a9eb62574c1..3b2e776e13dc7 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp16-reduce.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vctp16-reduce.mir
@@ -135,25 +135,25 @@ body:             |
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r3, renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2LSRri killed renamable $r12, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   renamable $r3 = t2SUBrs renamable $r2, killed renamable $r12, 26, 14 /* CC::al */, $noreg, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
-  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, undef $q1
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, $noreg, undef $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv19, align 1)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv2022, align 1)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv19, align 1)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv2022, align 1)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $sp = t2LDMIA_UPD $sp, 14 /* CC::al */, $noreg, def $r7, def $lr
   ; CHECK:   renamable $r0 = tSXTH killed renamable $r0, 14 /* CC::al */, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit killed $r0
@@ -186,7 +186,7 @@ body:             |
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, renamable $r12, 27, 14, $noreg, $noreg
     renamable $r3 = tLEApcrel %const.0, 14, $noreg
     renamable $r12 = t2LSRri killed renamable $r12, 3, 14, $noreg, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     renamable $r3 = t2SUBrs renamable $r2, killed renamable $r12, 26, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
 
@@ -194,24 +194,24 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
-    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, undef $q1
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
+    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, $noreg, undef $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv19, align 1)
-    renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv2022, align 1)
+    renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv19, align 1)
+    renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv2022, align 1)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14, $noreg
-    renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 
   bb.3.middle.block:
     liveins: $q0, $q1, $r3
 
-    renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
     $sp = t2LDMIA_UPD $sp, 14, $noreg, def $r7, def $lr
     renamable $r0 = tSXTH killed renamable $r0, 14, $noreg
     tBX_RET 14, $noreg, implicit killed $r0

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vector_spill_in_loop.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vector_spill_in_loop.mir
index 8501727fb8b62..09ad938b529b1 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vector_spill_in_loop.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vector_spill_in_loop.mir
@@ -29,13 +29,29 @@ body:             |
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r10, $r11, $r12
-  ; CHECK:   renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, $noreg
-  ; CHECK:   renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 0, $noreg
-  ; CHECK:   renamable $q5 = MVE_VSHR_immu16 killed renamable $q3, 11, 0, $noreg, undef renamable $q5
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg :: (store (s128) into %stack.0, align 8)
-  ; CHECK:   dead renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg :: (load (s128) from %stack.0, align 8)
-  ; CHECK:   dead renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 0, killed $noreg
+  ; CHECK:   renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 0, $noreg, $noreg
+  ; CHECK:   renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 0, $noreg, $noreg
+  ; CHECK:   renamable $q5 = MVE_VSHR_immu16 killed renamable $q3, 11, 0, $noreg, $noreg, undef renamable $q5
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg, $noreg :: (store (s128) into %stack.0, align 8)
+  ; CHECK:   dead renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
+  ; CHECK:   dead renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 0, killed $noreg, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.1
+  ; CHECK: bb.2:
+  ; CHECK:   successors: %bb.3(0x04000000), %bb.0(0x7c000000)
+  ; CHECK:   liveins: $q0, $r1, $r2, $r3, $r4, $r5, $r7, $r8, $r10, $r11, $r12
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 1, 14 /* CC::al */, $noreg
+  ; CHECK:   renamable $r10 = nuw t2ADDri killed renamable $r10, 1, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r12 = t2ADDrs killed renamable $r12, killed renamable $r0, 10, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 3, 14 /* CC::al */, $noreg
+  ; CHECK:   renamable $r2 = t2ADDrs killed renamable $r2, killed renamable $r0, 10, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 2, 14 /* CC::al */, $noreg
+  ; CHECK:   tCMPhir renamable $r10, killed renamable $r0, 14 /* CC::al */, $noreg, implicit-def $cpsr
+  ; CHECK:   tBcc %bb.0, 1 /* CC::ne */, killed $cpsr
+  ; CHECK: bb.3:
+  ; CHECK:   $sp = frame-destroy tADDspi $sp, 24, 14 /* CC::al */, $noreg
+  ; CHECK:   $sp = frame-destroy VLDMDIA_UPD $sp, 14 /* CC::al */, $noreg, def $d8, def $d9, def $d10, def $d11, def $d12, def $d13, def $d14, def $d15
+  ; CHECK:   $sp = frame-destroy tADDspi $sp, 1, 14 /* CC::al */, $noreg
+  ; CHECK:   $sp = frame-destroy t2LDMIA_RET $sp, 14 /* CC::al */, $noreg, def $r4, def $r5, def $r6, def $r7, def $r8, def $r9, def $r10, def $r11, def $pc
   bb.0:
     successors: %bb.1(0x80000000)
     liveins: $q0, $r1, $r2, $r3, $r4, $r5, $r7, $r8, $r10, $r11, $r12
@@ -49,18 +65,18 @@ body:             |
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r11, $r12
 
-    renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr
-    renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr
+    renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
+    renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q5 = MVE_VSHR_immu16 renamable $q3, 11, 1, renamable $vpr, undef renamable $q5
+    renamable $q5 = MVE_VSHR_immu16 renamable $q3, 11, 1, renamable $vpr, $noreg, undef renamable $q5
     renamable $r9 = nsw t2SUBri killed renamable $r9, 8, 14 /* CC::al */, $noreg, $noreg
-    MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg :: (store (s128) into %stack.0, align 8)
+    MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg, $noreg :: (store (s128) into %stack.0, align 8)
     MVE_VPST 8, implicit $vpr
-    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg :: (load (s128) from %stack.0, align 8)
+    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
     MVE_VPST 1, implicit $vpr
-    renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.1, implicit-def dead $cpsr
     tB %bb.2, 14 /* CC::al */, $noreg
 
@@ -102,19 +118,36 @@ body:             |
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.1(0x7c000000), %bb.2(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r11, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr
-  ; CHECK:   renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr
+  ; CHECK:   renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
+  ; CHECK:   renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr, $noreg
   ; CHECK:   MVE_VPST 2, implicit $vpr
-  ; CHECK:   renamable $q5 = MVE_VSHR_immu16 killed renamable $q3, 11, 1, renamable $vpr, undef renamable $q5
+  ; CHECK:   renamable $q5 = MVE_VSHR_immu16 killed renamable $q3, 11, 1, renamable $vpr, $noreg, undef renamable $q5
   ; CHECK:   renamable $r9 = nsw t2SUBri killed renamable $r9, 8, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg :: (store (s128) into %stack.0, align 8)
+  ; CHECK:   MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg, $noreg :: (store (s128) into %stack.0, align 8)
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   dead renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg :: (load (s128) from %stack.0, align 8)
+  ; CHECK:   dead renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
   ; CHECK:   MVE_VPST 1, implicit $vpr
-  ; CHECK:   dead renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr
+  ; CHECK:   dead renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.1
+  ; CHECK: bb.2:
+  ; CHECK:   successors: %bb.3(0x04000000), %bb.0(0x7c000000)
+  ; CHECK:   liveins: $q0, $r1, $r2, $r3, $r4, $r5, $r7, $r8, $r10, $r11, $r12
+  ; CHECK:   dead renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 1, 14 /* CC::al */, $noreg
+  ; CHECK:   renamable $r10 = nuw t2ADDri killed renamable $r10, 1, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r12 = t2ADDrs killed renamable $r12, killed renamable $r0, 10, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 3, 14 /* CC::al */, $noreg
+  ; CHECK:   renamable $r2 = t2ADDrs killed renamable $r2, killed renamable $r0, 10, 14 /* CC::al */, $noreg, $noreg
+  ; CHECK:   renamable $r0 = tLDRspi $sp, 2, 14 /* CC::al */, $noreg
+  ; CHECK:   tCMPhir renamable $r10, killed renamable $r0, 14 /* CC::al */, $noreg, implicit-def $cpsr
+  ; CHECK:   tBcc %bb.0, 1 /* CC::ne */, killed $cpsr
+  ; CHECK: bb.3:
+  ; CHECK:   $sp = frame-destroy tADDspi $sp, 24, 14 /* CC::al */, $noreg
+  ; CHECK:   $sp = frame-destroy VLDMDIA_UPD $sp, 14 /* CC::al */, $noreg, def $d8, def $d9, def $d10, def $d11, def $d12, def $d13, def $d14, def $d15
+  ; CHECK:   $sp = frame-destroy tADDspi $sp, 1, 14 /* CC::al */, $noreg
+  ; CHECK:   $sp = frame-destroy t2LDMIA_RET $sp, 14 /* CC::al */, $noreg, def $r4, def $r5, def $r6, def $r7, def $r8, def $r9, def $r10, def $r11, def $pc
   bb.0:
     successors: %bb.1(0x80000000)
     liveins: $q0, $r1, $r2, $r3, $r4, $r5, $r7, $r8, $r10, $r11, $r12
@@ -128,18 +161,18 @@ body:             |
     successors: %bb.1(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3, $r4, $r5, $r6, $r7, $r8, $r9, $r10, $r11, $r12
 
-    renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r9, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr
-    renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr
+    renamable $r0, renamable $q6 = MVE_VLDRHU16_post killed renamable $r0, 16, 1, renamable $vpr, $noreg
+    renamable $q3 = MVE_VLDRHU16 renamable $r6, 0, 1, renamable $vpr, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q5 = MVE_VSHR_immu16 renamable $q3, 11, 1, renamable $vpr, undef renamable $q5
+    renamable $q5 = MVE_VSHR_immu16 renamable $q3, 11, 1, renamable $vpr, $noreg, undef renamable $q5
     renamable $r9 = nsw t2SUBri killed renamable $r9, 8, 14 /* CC::al */, $noreg, $noreg
-    MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg :: (store (s128) into %stack.0, align 8)
+    MVE_VSTRWU32 killed renamable $q5, $sp, 80, 0, $noreg, $noreg :: (store (s128) into %stack.0, align 8)
     MVE_VPST 8, implicit $vpr
-    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg :: (load (s128) from %stack.0, align 8)
+    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
     MVE_VPST 1, implicit $vpr
-    renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPi16r killed renamable $q6, renamable $r8, 1, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.1, implicit-def dead $cpsr
     tB %bb.2, 14 /* CC::al */, $noreg
 
@@ -147,7 +180,7 @@ body:             |
     successors: %bb.3(0x04000000), %bb.0(0x7c000000)
     liveins: $q0, $r1, $r2, $r3, $r4, $r5, $r7, $r8, $r10, $r11, $r12
 
-    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg :: (load (s128) from %stack.0, align 8)
+    renamable $q7 = MVE_VLDRWU32 $sp, 80, 0, $noreg, $noreg :: (load (s128) from %stack.0, align 8)
     renamable $r0 = tLDRspi $sp, 1, 14 /* CC::al */, $noreg
     renamable $r10 = nuw t2ADDri killed renamable $r10, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2ADDrs killed renamable $r12, killed renamable $r0, 10, 14 /* CC::al */, $noreg, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmaxmin_vpred_r.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmaxmin_vpred_r.mir
index d7634a767fd81..e6e6834fe0087 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmaxmin_vpred_r.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmaxmin_vpred_r.mir
@@ -156,20 +156,20 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   $r7, $r6 = t2LDRDi8 $sp, 36, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.4, align 8), (load (s32) from %fixed-stack.5)
   ; CHECK:   $r5, $r4 = t2LDRDi8 $sp, 20, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8), (load (s32) from %fixed-stack.1)
-  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r7, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r7, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK: bb.2.for.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 0, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
-  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 0, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
-  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
-  ; CHECK:   renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r4, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post killed renamable $q2, killed renamable $r5, 4, 0, killed $noreg :: (store (s128) into %ir.output_cast, align 4)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 0, $noreg, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
+  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 0, $noreg, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
+  ; CHECK:   renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r4, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $r5 = MVE_VSTRWU32_post killed renamable $q2, killed renamable $r5, 4, 0, killed $noreg, $noreg :: (store (s128) into %ir.output_cast, align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   $r0, dead $cpsr = tMOVi8 0, 14 /* CC::al */, $noreg
@@ -197,27 +197,27 @@ body:             |
 
     $r7, $r6 = t2LDRDi8 $sp, 36, 14, $noreg :: (load (s32) from %fixed-stack.2, align 8), (load (s32) from %fixed-stack.1)
     $r5, $r4 = t2LDRDi8 $sp, 20, 14, $noreg :: (load (s32) from %fixed-stack.6, align 8), (load (s32) from %fixed-stack.5)
-    renamable $q0 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VDUP32 killed renamable $r7, 0, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VDUP32 killed renamable $r7, 0, $noreg, $noreg, undef renamable $q1
 
   bb.2.for.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r12, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s128) from %ir.input_2_cast, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s128) from %ir.input_1_cast, align 4)
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
+    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
     renamable $r12 = t2SUBri killed renamable $r12, 4, 14, $noreg, $noreg
-    renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, undef renamable $q2
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r4, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMULi32 killed renamable $q3, killed renamable $q2, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r4, 0, $noreg, $noreg, undef renamable $q2
     MVE_VPST 2, implicit $vpr
-    renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, undef renamable $q2
-    renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, renamable $vpr, undef renamable $q2
-    renamable $r5 = MVE_VSTRWU32_post killed renamable $q2, killed renamable $r5, 4, 1, killed renamable $vpr :: (store (s128) into %ir.output_cast, align 4)
+    renamable $q2 = MVE_VMAXu32 killed renamable $q2, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMINu32 killed renamable $q2, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $r5 = MVE_VSTRWU32_post killed renamable $q2, killed renamable $r5, 4, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.output_cast, align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmldava_in_vpt.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmldava_in_vpt.mir
index 974929b7ddc90..55ea864d31c79 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmldava_in_vpt.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vmldava_in_vpt.mir
@@ -152,20 +152,20 @@ body:             |
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
   ; CHECK:   renamable $r5 = tLDRspi $sp, 4, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0, align 8)
   ; CHECK:   $r6, $r12 = t2LDRDi8 $sp, 28, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.3), (load (s32) from %fixed-stack.4, align 8)
-  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r12, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VDUP32 killed renamable $r12, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r12 = t2MOVi 0, 14 /* CC::al */, $noreg, $noreg
   ; CHECK: bb.2.for.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r5, $r12
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 0, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
-  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 0, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
-  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
-  ; CHECK:   renamable $q3 = MVE_VMLAS_qr_u32 killed renamable $q3, killed renamable $q2, renamable $r5, 0, $noreg
-  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q3, renamable $q1, 0, $noreg, undef renamable $q2
-  ; CHECK:   renamable $q3 = MVE_VMINu32 renamable $q2, renamable $q0, 0, $noreg, undef renamable $q3
-  ; CHECK:   renamable $r12 = MVE_VMLADAVas32 killed renamable $r12, killed renamable $q3, killed renamable $q2, 0, killed $noreg
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 0, $noreg, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
+  ; CHECK:   renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 0, $noreg, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+  ; CHECK:   renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
+  ; CHECK:   renamable $q3 = MVE_VMLAS_qr_u32 killed renamable $q3, killed renamable $q2, renamable $r5, 0, $noreg, $noreg
+  ; CHECK:   renamable $q2 = MVE_VMAXu32 killed renamable $q3, renamable $q1, 0, $noreg, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q3 = MVE_VMINu32 renamable $q2, renamable $q0, 0, $noreg, $noreg, undef renamable $q3
+  ; CHECK:   renamable $r12 = MVE_VMLADAVas32 killed renamable $r12, killed renamable $q3, killed renamable $q2, 0, killed $noreg, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   liveins: $r12
@@ -194,27 +194,27 @@ body:             |
 
     renamable $r5 = tLDRspi $sp, 4, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.5, align 8)
     $r6, $r12 = t2LDRDi8 $sp, 28, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.2), (load (s32) from %fixed-stack.1, align 8)
-    renamable $q0 = MVE_VDUP32 killed renamable $r12, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VDUP32 killed renamable $r12, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VDUP32 killed renamable $r6, 0, $noreg, $noreg, undef renamable $q1
     renamable $r12 = t2MOVi 0, 14 /* CC::al */, $noreg, $noreg
 
   bb.2.for.body:
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $q1, $r0, $r1, $r2, $r3, $r4, $r5, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r4, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr :: (load (s128) from %ir.input_2_cast, align 4)
+    renamable $r1, renamable $q2 = MVE_VLDRWU32_post killed renamable $r1, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_2_cast, align 4)
     MVE_VPST 8, implicit $vpr
-    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr :: (load (s128) from %ir.input_1_cast, align 4)
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, undef renamable $q2
-    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, undef renamable $q3
+    renamable $r0, renamable $q3 = MVE_VLDRWU32_post killed renamable $r0, 4, 1, renamable $vpr, $noreg :: (load (s128) from %ir.input_1_cast, align 4)
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q2, renamable $r3, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q3 = MVE_VADD_qr_i32 killed renamable $q3, renamable $r2, 0, $noreg, $noreg, undef renamable $q3
     renamable $r4, dead $cpsr = tSUBi8 killed renamable $r4, 4, 14 /* CC::al */, $noreg
-    renamable $q3 = MVE_VMLAS_qr_u32 killed renamable $q3, killed renamable $q2, renamable $r5, 0, $noreg
+    renamable $q3 = MVE_VMLAS_qr_u32 killed renamable $q3, killed renamable $q2, renamable $r5, 0, $noreg, $noreg
     MVE_VPST 2, implicit $vpr
-    renamable $q2 = MVE_VMAXu32 killed renamable $q3, renamable $q1, 1, renamable $vpr, undef renamable $q2
-    renamable $q3 = MVE_VMINu32 renamable $q2, renamable $q0, 1, renamable $vpr, undef renamable $q3
-    renamable $r12 = MVE_VMLADAVas32 killed renamable $r12, killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr
+    renamable $q2 = MVE_VMAXu32 killed renamable $q3, renamable $q1, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q3 = MVE_VMINu32 renamable $q2, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q3
+    renamable $r12 = MVE_VMLADAVas32 killed renamable $r12, killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-blocks.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-blocks.mir
index b11a55384952e..a3d28cfdd1d65 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-blocks.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-blocks.mir
@@ -225,16 +225,16 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tRSB renamable $r2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r2, $r3
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg, $noreg
   ; CHECK:   MVE_VPTv4s32r 4, renamable $q1, renamable $r2, 11, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -255,7 +255,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -267,13 +267,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
     MVE_VPTv4s32r 2, renamable $q1, renamable $r2, 11, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -343,7 +343,7 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -352,14 +352,14 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, renamable $vpr
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, renamable $vpr, $noreg
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   MVE_VPTv4s32r 2, renamable $q1, renamable $r2, 11, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
@@ -385,7 +385,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -397,14 +397,14 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, renamable $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+    renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, renamable $vpr, $noreg
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
     MVE_VPTv4s32r 2, renamable $q1, renamable $r2, 11, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -474,7 +474,7 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -483,13 +483,13 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   MVE_VPTv4s32r 2, renamable $q1, renamable $r2, 11, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
@@ -514,7 +514,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -526,13 +526,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
     MVE_VPTv4s32r 2, renamable $q1, renamable $r2, 11, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -601,16 +601,16 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tRSB renamable $r2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r2, $r3
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg, $noreg
   ; CHECK:   MVE_VPTv4s32r 12, renamable $q1, renamable $r2, 10, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 13, 1, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 2, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 13, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 2, killed renamable $vpr, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -634,7 +634,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -646,13 +646,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
     MVE_VPTv4s32r 14, renamable $q1, renamable $r2, 10, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 13, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 2, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 2, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 13, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 2, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 2, killed renamable $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -721,16 +721,16 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tRSB renamable $r2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r2, $r3
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 0, killed $noreg, $noreg
   ; CHECK:   MVE_VPTv4s32r 4, renamable $q0, renamable $r2, 11, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+  ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -750,7 +750,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -762,13 +762,13 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, killed renamable $vpr, $noreg
     MVE_VPTv4s32r 2, renamable $q0, renamable $r2, 11, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r killed renamable $q1, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q0, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -837,14 +837,14 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r1, $r2
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3, dead $cpsr = nsw tRSB renamable $r2, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r2, $r3
   ; CHECK:   MVE_VPTv4s32r 8, renamable $q0, renamable $r2, 8, implicit-def $vpr
-  ; CHECK:   dead renamable $vpr = MVE_VCMPs32r renamable $q0, renamable $r3, 12, 1, killed renamable $vpr
+  ; CHECK:   dead renamable $vpr = MVE_VCMPs32r renamable $q0, renamable $r3, 12, 1, killed renamable $vpr, $noreg
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -864,7 +864,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -877,8 +877,8 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
     MVE_VPTv4s32r 8, renamable $q0, renamable $r2, 8, implicit-def $vpr
-    renamable $vpr = MVE_VCMPs32r killed renamable $q0, renamable $r3, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg
+    renamable $vpr = MVE_VCMPs32r killed renamable $q0, renamable $r3, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 0, $noreg, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -948,7 +948,7 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -958,9 +958,9 @@ body:             |
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2
   ; CHECK:   MVE_VPTv4s32r 2, killed renamable $q0, renamable $r2, 2, implicit-def $vpr
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 renamable $r0, 0, 1, $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed $vpr
-  ; CHECK:   MVE_VSTRWU32 renamable $q0, renamable $r0, 0, 1, killed $vpr
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 renamable $r0, 0, 1, $vpr, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r1, 1, killed $vpr, $noreg
+  ; CHECK:   MVE_VSTRWU32 renamable $q0, renamable $r0, 0, 1, killed $vpr, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
@@ -984,7 +984,7 @@ body:             |
     liveins: $r0, $r1, $r2, $r7, $lr
 
     renamable $r3, dead $cpsr = tADDi3 renamable $r1, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -997,9 +997,9 @@ body:             |
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
     MVE_VPTv4s32r 2, renamable $q0, renamable $r2, 2, implicit-def $vpr
-    renamable $q0 = MVE_VLDRWU32 renamable $r0, 0, 1, $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r1, 1, $vpr
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, $vpr
+    renamable $q0 = MVE_VLDRWU32 renamable $r0, 0, 1, $vpr, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r1, 1, $vpr, $noreg
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r0, 0, 1, $vpr, $noreg
     renamable $r1, dead $cpsr = tSUBi8 killed renamable $r1, 4, 14 /* CC::al */, $noreg
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
@@ -1043,7 +1043,7 @@ body:             |
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   tCMPi8 killed renamable $r2, 0, 14 /* CC::al */, $noreg, implicit-def $cpsr
   ; CHECK:   renamable $r2 = t2CSINC $zr, $zr, 0, implicit killed $cpsr
-  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   renamable $r12 = t2ANDri killed renamable $r2, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r2 = t2RSBri killed renamable $r12, 0, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   $vpr = VMSR_P0 killed $r2, 14 /* CC::al */, $noreg
@@ -1054,7 +1054,7 @@ body:             |
   ; CHECK:   liveins: $lr, $q0, $r1
   ; CHECK:   renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr :: (store (s128), align 4)
+  ; CHECK:   renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3:
   ; CHECK:   $sp = frame-destroy tADDspi $sp, 1, 14 /* CC::al */, $noreg
@@ -1079,7 +1079,7 @@ body:             |
     tCMPi8 killed renamable $r2, 0, 14 /* CC::al */, $noreg, implicit-def $cpsr
     renamable $r2 = t2CSINC $zr, $zr, 0, implicit killed $cpsr
     renamable $r3, dead $cpsr = tADDi3 renamable $r0, 3, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
     renamable $r12 = t2ANDri killed renamable $r2, 1, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
@@ -1096,10 +1096,10 @@ body:             |
 
     renamable $vpr = VLDR_P0_off $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.0)
     MVE_VPST 8, implicit $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r0, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCTP32 renamable $r0, 1, killed renamable $vpr, $noreg
     renamable $r0, dead $cpsr = tSUBi8 killed renamable $r0, 4, 14 /* CC::al */, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr :: (store (s128), align 4)
+    renamable $r1 = MVE_VSTRWU32_post renamable $q0, killed renamable $r1, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -1144,16 +1144,16 @@ body:             |
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $r0, $r1, $r2
   ; CHECK:   renamable $r12 = t2LEApcrel %const.0, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
-  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
+  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   $lr = MVE_DLSTP_32 killed renamable $r2
   ; CHECK: bb.2 (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $q2, $r0, $r1
   ; CHECK:   MVE_VPTv4s32r 8, renamable $q0, renamable $r1, 11, implicit-def $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128), align 4)
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -1178,24 +1178,24 @@ body:             |
     renamable $r12 = t2LEApcrel %const.0, 14 /* CC::al */, $noreg
     renamable $r3, dead $cpsr = nuw tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
     renamable $lr = t2DoLoopStartTP killed renamable $lr, renamable $r2
 
   bb.2 (align 4):
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $q1, $q2, $r0, $r1, $r2
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $vpr = MVE_VCMPs32r renamable $q0, renamable $r1, 11, 1, killed renamable $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128), align 4)
+    renamable $vpr = MVE_VCMPs32r renamable $q0, renamable $r1, 11, 1, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1, implicit-def dead $cpsr
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg
@@ -1245,21 +1245,21 @@ body:             |
   ; CHECK:   renamable $r12 = t2LEApcrel %const.0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = nuw tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK: bb.2 (align 4):
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $q1, $q2, $r0, $r1, $r2
   ; CHECK:   MVE_VPTv4s32r 8, renamable $q0, renamable $r1, 11, implicit-def $vpr
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed $vpr
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 1, killed $vpr, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128), align 4)
+  ; CHECK:   renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3:
   ; CHECK:   frame-destroy tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc
@@ -1284,12 +1284,12 @@ body:             |
     renamable $r12 = t2LEApcrel %const.0, 14 /* CC::al */, $noreg
     renamable $r3, dead $cpsr = nuw tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
     renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg :: (load (s128) from constant-pool, align 8)
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r12, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool, align 8)
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, killed renamable $r12, 19, 14 /* CC::al */, $noreg, $noreg
-    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VMOVimmi32 4, 0, $noreg, $noreg, undef renamable $q2
     renamable $lr = t2DoLoopStartTP killed renamable $lr, renamable $r2
 
   bb.2 (align 4):
@@ -1297,11 +1297,11 @@ body:             |
     liveins: $lr, $q0, $q1, $q2, $r0, $r1, $r2
 
     MVE_VPTv4s32r 8, renamable $q0, renamable $r1, 11, implicit-def $vpr
-    renamable $vpr = MVE_VCTP32 renamable $r2, 1, $vpr
+    renamable $vpr = MVE_VCTP32 renamable $r2, 1, $vpr, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr :: (store (s128), align 4)
+    renamable $r0 = MVE_VSTRWU32_post renamable $q1, killed renamable $r0, 16, 1, killed renamable $vpr, $noreg :: (store (s128), align 4)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VADDi32 killed renamable $q0, renamable $q2, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1, implicit-def dead $cpsr
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wls-search-pred.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wls-search-pred.mir
index 833abbb50b5dd..20ec60d622619 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wls-search-pred.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wls-search-pred.mir
@@ -59,7 +59,7 @@ body:             |
   ; CHECK: bb.1.prehead:
   ; CHECK:   successors: %bb.3(0x40000000), %bb.2(0x40000000)
   ; CHECK:   [[DEF:%[0-9]+]]:mqpr = IMPLICIT_DEF
-  ; CHECK:   [[MVE_VMOVimmi32_:%[0-9]+]]:mqpr = MVE_VMOVimmi32 0, 0, $noreg, [[DEF]]
+  ; CHECK:   [[MVE_VMOVimmi32_:%[0-9]+]]:mqpr = MVE_VMOVimmi32 0, 0, $noreg, $noreg, [[DEF]]
   ; CHECK:   [[t2ADDri:%[0-9]+]]:rgpr = t2ADDri [[COPY]], 15, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   [[t2BICri:%[0-9]+]]:rgpr = t2BICri killed [[t2ADDri]], 16, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   [[t2LSRri:%[0-9]+]]:gprlr = t2LSRri killed [[t2BICri]], 4, 14 /* CC::al */, $noreg, $noreg
@@ -69,9 +69,9 @@ body:             |
   ; CHECK:   [[PHI:%[0-9]+]]:rgpr = PHI [[COPY2]], %bb.1, %11, %bb.2
   ; CHECK:   [[PHI1:%[0-9]+]]:gprlr = PHI [[t2WhileLoopStartTP]], %bb.1, %13, %bb.2
   ; CHECK:   [[PHI2:%[0-9]+]]:rgpr = PHI [[COPY]], %bb.1, %15, %bb.2
-  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg
+  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg, $noreg
   ; CHECK:   [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri [[PHI2]], 16, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VMOVimmi32_]], [[PHI]], 16, 1, [[MVE_VCTP8_]]
+  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VMOVimmi32_]], [[PHI]], 16, 1, [[MVE_VCTP8_]], [[PHI1]]
   ; CHECK:   [[t2LoopEndDec:%[0-9]+]]:gprlr = t2LoopEndDec [[PHI1]], %bb.2, implicit-def $cpsr
   ; CHECK:   t2B %bb.3, 14 /* CC::al */, $noreg
   ; CHECK: bb.3.prehead:
@@ -105,7 +105,7 @@ body:             |
     successors: %bb.5(0x40000000), %bb.4(0x40000000)
 
     %12:mqpr = IMPLICIT_DEF
-    %11:mqpr = MVE_VMOVimmi32 0, 0, $noreg, %12
+    %11:mqpr = MVE_VMOVimmi32 0, 0, $noreg, $noreg, %12
     %17:rgpr = t2ADDri %9, 15, 14 /* CC::al */, $noreg, $noreg
     %18:rgpr = t2BICri killed %17, 16, 14 /* CC::al */, $noreg, $noreg
     %19:gprlr = t2LSRri killed %18, 4, 14 /* CC::al */, $noreg, $noreg
@@ -118,9 +118,9 @@ body:             |
     %21:rgpr = PHI %7, %bb.1, %22, %bb.4
     %23:gprlr = PHI %20, %bb.1, %24, %bb.4
     %25:rgpr = PHI %9, %bb.1, %26, %bb.4
-    %27:vccr = MVE_VCTP8 %25, 0, $noreg
+    %27:vccr = MVE_VCTP8 %25, 0, $noreg, $noreg
     %26:rgpr = t2SUBri %25, 16, 14 /* CC::al */, $noreg, $noreg
-    %22:rgpr = MVE_VSTRBU8_post %11, %21, 16, 1, %27
+    %22:rgpr = MVE_VSTRBU8_post %11, %21, 16, 1, %27, $noreg
     %24:gprlr = t2LoopDec %23, 1
     t2LoopEnd %24, %bb.4, implicit-def $cpsr
     t2B %bb.5, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wlstp.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wlstp.mir
index 128710c0462d8..b5998e3be0823 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wlstp.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wlstp.mir
@@ -205,13 +205,13 @@ body:             |
   ; CHECK:   successors: %bb.3(0x04000000), %bb.2(0x7c000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r12
   ; CHECK:   renamable $r4 = t2ADDrr renamable $r1, renamable $r12, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRBU8 killed renamable $r4, 0, 0, $noreg :: (load (s128) from %ir.scevgep45, align 1)
+  ; CHECK:   renamable $q0 = MVE_VLDRBU8 killed renamable $r4, 0, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep45, align 1)
   ; CHECK:   renamable $r4 = t2ADDrr renamable $r2, renamable $r12, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q1 = MVE_VLDRBU8 killed renamable $r4, 0, 0, $noreg :: (load (s128) from %ir.scevgep23, align 1)
+  ; CHECK:   renamable $q1 = MVE_VLDRBU8 killed renamable $r4, 0, 0, $noreg, $noreg :: (load (s128) from %ir.scevgep23, align 1)
   ; CHECK:   renamable $r4 = t2ADDrr renamable $r0, renamable $r12, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2ADDri killed renamable $r12, 16, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VMULi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRBU8 killed renamable $q0, killed renamable $r4, 0, 0, killed $noreg :: (store (s128) into %ir.scevgep1, align 1)
+  ; CHECK:   renamable $q0 = MVE_VMULi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRBU8 killed renamable $q0, killed renamable $r4, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.scevgep1, align 1)
   ; CHECK:   $lr = MVE_LETP killed renamable $lr, %bb.2
   ; CHECK: bb.3.for.cond.cleanup:
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r4, def $pc
@@ -242,18 +242,18 @@ body:             |
     liveins: $lr, $r0, $r1, $r2, $r3, $r12
 
     renamable $r4 = t2ADDrr renamable $r1, renamable $r12, 14, $noreg, $noreg
-    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP8 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q0 = MVE_VLDRBU8 killed renamable $r4, 0, 1, renamable $vpr :: (load (s128) from %ir.scevgep45, align 1)
+    renamable $q0 = MVE_VLDRBU8 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.scevgep45, align 1)
     renamable $r4 = t2ADDrr renamable $r2, renamable $r12, 14, $noreg, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = MVE_VLDRBU8 killed renamable $r4, 0, 1, renamable $vpr :: (load (s128) from %ir.scevgep23, align 1)
+    renamable $q1 = MVE_VLDRBU8 killed renamable $r4, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.scevgep23, align 1)
     renamable $r4 = t2ADDrr renamable $r0, renamable $r12, 14, $noreg, $noreg
     renamable $r12 = t2ADDri killed renamable $r12, 16, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 16, 14, $noreg
-    renamable $q0 = MVE_VMULi8 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VMULi8 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRBU8 killed renamable $q0, killed renamable $r4, 0, 1, killed renamable $vpr :: (store (s128) into %ir.scevgep1, align 1)
+    MVE_VSTRBU8 killed renamable $q0, killed renamable $r4, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.scevgep1, align 1)
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -322,10 +322,10 @@ body:             |
   ; CHECK: bb.1.vector.body:
   ; CHECK:   successors: %bb.2(0x04000000), %bb.1(0x7c000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $q0 = MVE_VLDRHU16 renamable $r1, 0, 0, $noreg :: (load (s128) from %ir.lsr.iv57, align 2)
-  ; CHECK:   renamable $q1 = MVE_VLDRHU16 renamable $r2, 0, 0, $noreg :: (load (s128) from %ir.lsr.iv24, align 2)
-  ; CHECK:   renamable $q0 = MVE_VMULi16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRHU16 killed renamable $q0, renamable $r0, 0, 0, killed $noreg :: (store (s128) into %ir.lsr.iv1, align 2)
+  ; CHECK:   renamable $q0 = MVE_VLDRHU16 renamable $r1, 0, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv57, align 2)
+  ; CHECK:   renamable $q1 = MVE_VLDRHU16 renamable $r2, 0, 0, $noreg, $noreg :: (load (s128) from %ir.lsr.iv24, align 2)
+  ; CHECK:   renamable $q0 = MVE_VMULi16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRHU16 killed renamable $q0, renamable $r0, 0, 0, killed $noreg, $noreg :: (store (s128) into %ir.lsr.iv1, align 2)
   ; CHECK:   renamable $r1, dead $cpsr = tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tADDi8 killed renamable $r2, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
@@ -352,13 +352,13 @@ body:             |
     successors: %bb.2(0x04000000), %bb.1(0x7c000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg
+    renamable $vpr = MVE_VCTP16 renamable $r3, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $q0 = MVE_VLDRHU16 renamable $r1, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv57, align 2)
-    renamable $q1 = MVE_VLDRHU16 renamable $r2, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 2)
-    renamable $q0 = MVE_VMULi16 killed renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VLDRHU16 renamable $r1, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv57, align 2)
+    renamable $q1 = MVE_VLDRHU16 renamable $r2, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 2)
+    renamable $q0 = MVE_VMULi16 killed renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     MVE_VPST 8, implicit $vpr
-    MVE_VSTRHU16 killed renamable $q0, renamable $r0, 0, 1, killed renamable $vpr :: (store (s128) into %ir.lsr.iv1, align 2)
+    MVE_VSTRHU16 killed renamable $q0, renamable $r0, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.lsr.iv1, align 2)
     renamable $r1, dead $cpsr = tADDi8 killed renamable $r1, 16, 14, $noreg
     renamable $r2, dead $cpsr = tADDi8 killed renamable $r2, 16, 14, $noreg
     renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 16, 14, $noreg
@@ -436,29 +436,29 @@ body:             |
   ; CHECK: bb.1.vector.ph:
   ; CHECK:   successors: %bb.2(0x80000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.3(0x04000000), %bb.2(0x7c000000)
   ; CHECK:   liveins: $lr, $q1, $r0, $r1, $r2
-  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
-  ; CHECK:   renamable $vpr = MVE_VCTP32 $r2, 0, $noreg
+  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
+  ; CHECK:   renamable $vpr = MVE_VCTP32 $r2, 0, $noreg, $noreg
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-  ; CHECK:   renamable $q2 = MVE_VLDRWU32 renamable $r1, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
+  ; CHECK:   renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+  ; CHECK:   renamable $q2 = MVE_VLDRWU32 renamable $r1, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
   ; CHECK:   $r3 = tMOVr $r2, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r1, dead $cpsr = tADDi8 killed renamable $r1, 16, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed $r2, 4, 14 /* CC::al */, $noreg
   ; CHECK:   MVE_VPST 8, implicit $vpr
-  ; CHECK:   renamable $q1 = nsw MVE_VADDi32 killed renamable $q1, renamable $q0, 1, killed renamable $vpr, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VADDi32 killed renamable $q1, renamable $q0, 1, killed renamable $vpr, $noreg, undef renamable $q1
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   successors: %bb.4(0x80000000)
   ; CHECK:   liveins: $q0, $q1, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK: bb.4.for.cond.cleanup:
   ; CHECK:   liveins: $r12
   ; CHECK:   $r0 = tMOVr killed $r12, 14 /* CC::al */, $noreg
@@ -484,24 +484,24 @@ body:             |
     successors: %bb.2(0x80000000)
     liveins: $lr, $r0, $r1, $r2
 
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
 
   bb.2.vector.body:
     successors: %bb.3(0x04000000), %bb.2(0x7c000000)
     liveins: $lr, $q1, $r0, $r1, $r2
 
-    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, undef $q0
-    renamable $vpr = MVE_VCTP32 $r2, 0, $noreg
+    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, $noreg, undef $q0
+    renamable $vpr = MVE_VCTP32 $r2, 0, $noreg, $noreg
     MVE_VPST 4, implicit $vpr
-    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv24, align 4)
-    renamable $q2 = MVE_VLDRWU32 renamable $r1, 0, 1, renamable $vpr :: (load (s128) from %ir.lsr.iv1, align 4)
+    renamable $q1 = MVE_VLDRWU32 renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv24, align 4)
+    renamable $q2 = MVE_VLDRWU32 renamable $r1, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.lsr.iv1, align 4)
     $r3 = tMOVr $r2, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r0, dead $cpsr = tADDi8 killed renamable $r0, 16, 14, $noreg
     renamable $r1, dead $cpsr = tADDi8 killed renamable $r1, 16, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed $r2, 4, 14, $noreg
     MVE_VPST 8, implicit $vpr
-    renamable $q1 = nsw MVE_VADDi32 killed renamable $q1, renamable $q0, 1, renamable $vpr, undef renamable $q1
+    renamable $q1 = nsw MVE_VADDi32 killed renamable $q1, renamable $q0, 1, renamable $vpr, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -510,9 +510,9 @@ body:             |
     successors: %bb.4(0x80000000)
     liveins: $q0, $q1, $r3
 
-    renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP32 killed renamable $r3, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+    renamable $r12 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
 
   bb.4.for.cond.cleanup:
     liveins: $r12

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-liveout-lsr-shift.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-liveout-lsr-shift.mir
index 33368fe14b1cb..6d4c6444dd778 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-liveout-lsr-shift.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-liveout-lsr-shift.mir
@@ -135,25 +135,25 @@ body:             |
   ; CHECK:   renamable $lr = nuw nsw t2ADDrs killed renamable $r3, renamable $r12, 27, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3 = tLEApcrel %const.0, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r12 = t2LSRri killed renamable $r12, 2, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+  ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
   ; CHECK:   renamable $r3 = t2SUBrs renamable $r2, killed renamable $r12, 26, 14 /* CC::al */, $noreg, $noreg
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $q0, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
-  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, undef $q1
+  ; CHECK:   renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q1 = MVE_VORR killed $q0, killed $q0, 0, $noreg, $noreg, undef $q1
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv19, align 1)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv2022, align 1)
+  ; CHECK:   renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv19, align 1)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv2022, align 1)
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
-  ; CHECK:   renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
   ; CHECK:   $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   $sp = t2LDMIA_UPD $sp, 14 /* CC::al */, $noreg, def $r7, def $lr
   ; CHECK:   renamable $r0 = tSXTH killed renamable $r0, 14 /* CC::al */, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit killed $r0
@@ -186,7 +186,7 @@ body:             |
     renamable $lr = nuw nsw t2ADDrs killed renamable $r3, renamable $r12, 27, 14, $noreg, $noreg
     renamable $r3 = tLEApcrel %const.0, 14, $noreg
     renamable $r12 = t2LSRri killed renamable $r12, 2, 14, $noreg, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg :: (load (s128) from constant-pool)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r3, 0, 0, $noreg, $noreg :: (load (s128) from constant-pool)
     renamable $r3 = t2SUBrs renamable $r2, killed renamable $r12, 26, 14, $noreg, $noreg
     $lr = t2DoLoopStart renamable $lr
 
@@ -194,24 +194,24 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $lr, $q0, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg
-    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, undef $q1
+    renamable $vpr = MVE_VCTP16 renamable $r2, 0, $noreg, $noreg
+    $q1 = MVE_VORR killed $q0, $q0, 0, $noreg, $noreg, undef $q1
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv19, align 1)
-    renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv2022, align 1)
+    renamable $r0, renamable $q0 = MVE_VLDRBU16_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv19, align 1)
+    renamable $r1, renamable $q2 = MVE_VLDRBU16_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv2022, align 1)
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 8, 14, $noreg
-    renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = nuw MVE_VMULi16 killed renamable $q2, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     renamable $lr = t2LoopDec killed renamable $lr, 1
-    renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, undef renamable $q0
+    renamable $q0 = MVE_VSUBi16 renamable $q1, killed renamable $q0, 0, $noreg, $noreg, undef renamable $q0
     t2LoopEnd renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
 
   bb.3.middle.block:
     liveins: $q0, $q1, $r3
 
-    renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP16 killed renamable $r3, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu16no_acc killed renamable $q0, 0, $noreg, $noreg
     $sp = t2LDMIA_UPD $sp, 14, $noreg, def $r7, def $lr
     renamable $r0 = tSXTH killed renamable $r0, 14, $noreg
     tBX_RET 14, $noreg, implicit killed $r0

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-opcode-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-opcode-liveout.mir
index c1a08f98f71bc..76b08a6418810 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-opcode-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-opcode-liveout.mir
@@ -128,7 +128,7 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -139,25 +139,25 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q1, $r0, $r1, $r2, $r3, $r12
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
-  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
+  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
   ; CHECK:   $lr = tMOVr $r12, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r12 = nsw t2SUBri killed $r12, 1, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r2, $r3
   ; CHECK:   renamable $r0, dead $cpsr = tSUBi3 killed renamable $r2, 1, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q2 = MVE_VDUP32 killed renamable $r0, 0, $noreg, undef renamable $q2
+  ; CHECK:   renamable $q2 = MVE_VDUP32 killed renamable $r0, 0, $noreg, $noreg, undef renamable $q2
   ; CHECK:   renamable $r0, dead $cpsr = tADDi3 killed renamable $r3, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $vpr = MVE_VCMPu32r killed renamable $q2, killed renamable $r0, 8, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCMPu32r killed renamable $q2, killed renamable $r0, 8, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x80000000)
@@ -177,7 +177,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -190,16 +190,16 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q1, $r0, $r1, $r2, $r3, $r12
 
-    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg
-    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, undef $q0
+    renamable $vpr = MVE_VCTP32 renamable $r3, 0, $noreg, $noreg
+    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, $noreg, undef $q0
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r12, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r12 = nsw t2SUBri killed $r12, 1, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tSUBi8 killed renamable $r3, 4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -208,11 +208,11 @@ body:             |
     liveins: $q0, $q1, $r2, $r3
 
     renamable $r0, dead $cpsr = tSUBi3 killed renamable $r2, 1, 14, $noreg
-    renamable $q2 = MVE_VDUP32 killed renamable $r0, 0, $noreg, undef renamable $q2
+    renamable $q2 = MVE_VDUP32 killed renamable $r0, 0, $noreg, $noreg, undef renamable $q2
     renamable $r0, dead $cpsr = tADDi3 killed renamable $r3, 4, 14, $noreg
-    renamable $vpr = MVE_VCMPu32r killed renamable $q2, killed renamable $r0, 8, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCMPu32r killed renamable $q2, killed renamable $r0, 8, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-operand-liveout.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-operand-liveout.mir
index 95d48f2eecab5..ae8870034e91e 100644
--- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-operand-liveout.mir
+++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/wrong-vctp-operand-liveout.mir
@@ -120,7 +120,7 @@ body:             |
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $lr, -4
   ; CHECK:   frame-setup CFI_INSTRUCTION offset $r7, -8
   ; CHECK:   renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r3 = t2BICri killed renamable $r3, 3, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r12 = t2SUBri killed renamable $r3, 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   renamable $r3, dead $cpsr = tMOVi8 1, 14 /* CC::al */, $noreg
@@ -130,22 +130,22 @@ body:             |
   ; CHECK: bb.2.vector.body:
   ; CHECK:   successors: %bb.2(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $q1, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+  ; CHECK:   renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
   ; CHECK:   MVE_VPST 4, implicit $vpr
-  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+  ; CHECK:   renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+  ; CHECK:   renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
   ; CHECK:   $lr = tMOVr $r3, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14 /* CC::al */, $noreg
   ; CHECK:   renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14 /* CC::al */, $noreg
-  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+  ; CHECK:   renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
   ; CHECK:   dead $lr = t2LEUpdate killed renamable $lr, %bb.2
   ; CHECK: bb.3.middle.block:
   ; CHECK:   liveins: $q0, $q1, $r2
-  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r2, 0, $noreg
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VCTP32 killed renamable $r2, 0, $noreg, $noreg
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+  ; CHECK:   renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   tPOP_RET 14 /* CC::al */, $noreg, def $r7, def $pc, implicit killed $r0
   bb.0.entry:
     successors: %bb.1(0x80000000)
@@ -165,7 +165,7 @@ body:             |
     frame-setup CFI_INSTRUCTION offset $lr, -4
     frame-setup CFI_INSTRUCTION offset $r7, -8
     renamable $r3, dead $cpsr = tADDi3 renamable $r2, 3, 14, $noreg
-    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3 = t2BICri killed renamable $r3, 3, 14, $noreg, $noreg
     renamable $r12 = t2SUBri killed renamable $r3, 4, 14, $noreg, $noreg
     renamable $r3, dead $cpsr = tMOVi8 1, 14, $noreg
@@ -177,16 +177,16 @@ body:             |
     successors: %bb.2(0x7c000000), %bb.3(0x04000000)
     liveins: $q1, $r0, $r1, $r2, $r3
 
-    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg
-    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, undef $q0
+    renamable $vpr = MVE_VCTP32 renamable $r2, 0, $noreg, $noreg
+    $q0 = MVE_VORR killed $q1, $q1, 0, $noreg, $noreg, undef $q0
     MVE_VPST 4, implicit $vpr
-    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr :: (load (s64) from %ir.lsr.iv17, align 2)
-    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr :: (load (s64) from %ir.lsr.iv1820, align 2)
+    renamable $r0, renamable $q1 = MVE_VLDRHS32_post killed renamable $r0, 8, 1, renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv17, align 2)
+    renamable $r1, renamable $q2 = MVE_VLDRHS32_post killed renamable $r1, 8, 1, killed renamable $vpr, $noreg :: (load (s64) from %ir.lsr.iv1820, align 2)
     $lr = tMOVr $r3, 14, $noreg
-    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, undef renamable $q1
+    renamable $q1 = nsw MVE_VMULi32 killed renamable $q2, killed renamable $q1, 0, $noreg, $noreg, undef renamable $q1
     renamable $r3, dead $cpsr = nsw tSUBi8 killed $r3, 1, 14, $noreg
     renamable $r2, dead $cpsr = tSUBi8 killed renamable $r2, 4, 14, $noreg
-    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, undef renamable $q1
+    renamable $q1 = MVE_VADDi32 killed renamable $q1, renamable $q0, 0, $noreg, $noreg, undef renamable $q1
     renamable $lr = t2LoopDec killed renamable $lr, 1
     t2LoopEnd killed renamable $lr, %bb.2, implicit-def dead $cpsr
     tB %bb.3, 14, $noreg
@@ -194,9 +194,9 @@ body:             |
   bb.3.middle.block:
     liveins: $q0, $q1, $r2
 
-    renamable $vpr = MVE_VCTP32 killed renamable $r2, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr
-    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg
+    renamable $vpr = MVE_VCTP32 killed renamable $r2, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q1, killed renamable $q0, 0, killed renamable $vpr, $noreg
+    renamable $r0 = MVE_VADDVu32no_acc killed renamable $q0, 0, $noreg, $noreg
     tPOP_RET 14, $noreg, def $r7, def $pc, implicit killed $r0
 
 ...

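The next file checks the gather/scatter intrinsics at the post-ISel level, where the operands are still SSA virtual registers. The pattern is the same: the new trailing slot is appended after the existing vpr predicate operand, again as $noreg. For example, from the predicated-gather test below:

  before: early-clobber %4:mqpr = MVE_VLDRBS32_rq %0, %1, 1, killed %3 :: (load (s32), align 1)
  after:  early-clobber %4:mqpr = MVE_VLDRBS32_rq %0, %1, 1, killed %3, $noreg :: (load (s32), align 1)

Note that the vpr operand (killed %3) is unchanged; the new operand sits between it and the memory operand.
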
diff --git a/llvm/test/CodeGen/Thumb2/mve-gatherscatter-mmo.ll b/llvm/test/CodeGen/Thumb2/mve-gatherscatter-mmo.ll
index 61bb94365991f..18ab4c38b122f 100644
--- a/llvm/test/CodeGen/Thumb2/mve-gatherscatter-mmo.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-gatherscatter-mmo.ll
@@ -2,7 +2,7 @@
 
 define arm_aapcs_vfpcc <8 x i16> @test_vldrbq_gather_offset_s16(i8* %base, <8 x i16> %offset) {
 ; CHECK-LABEL: name: test_vldrbq_gather_offset_s16
-; CHECK: early-clobber %2:mqpr = MVE_VLDRBS16_rq %0, %1, 0, $noreg :: (load (s64), align 1)
+; CHECK: early-clobber %2:mqpr = MVE_VLDRBS16_rq %0, %1, 0, $noreg, $noreg :: (load (s64), align 1)
 entry:
   %0 = call <8 x i16> @llvm.arm.mve.vldr.gather.offset.v8i16.p0i8.v8i16(i8* %base, <8 x i16> %offset, i32 8, i32 0, i32 0)
   ret <8 x i16> %0
@@ -10,7 +10,7 @@ entry:
 
 define arm_aapcs_vfpcc <4 x i32> @test_vldrbq_gather_offset_z_s32(i8* %base, <4 x i32> %offset, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vldrbq_gather_offset_z_s32
-; CHECK: early-clobber %4:mqpr = MVE_VLDRBS32_rq %0, %1, 1, killed %3 :: (load (s32), align 1)
+; CHECK: early-clobber %4:mqpr = MVE_VLDRBS32_rq %0, %1, 1, killed %3, $noreg :: (load (s32), align 1)
 entry:
   %0 = zext i16 %p to i32
   %1 = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 %0)
@@ -20,7 +20,7 @@ entry:
 
 define arm_aapcs_vfpcc <2 x i64> @test_vldrdq_gather_base_s64(<2 x i64> %addr) {
 ; CHECK-LABEL: name: test_vldrdq_gather_base_s64
-; CHECK: early-clobber %1:mqpr = MVE_VLDRDU64_qi %0, 616, 0, $noreg :: (load (s128), align 1)
+; CHECK: early-clobber %1:mqpr = MVE_VLDRDU64_qi %0, 616, 0, $noreg, $noreg :: (load (s128), align 1)
 entry:
   %0 = call <2 x i64> @llvm.arm.mve.vldr.gather.base.v2i64.v2i64(<2 x i64> %addr, i32 616)
   ret <2 x i64> %0
@@ -28,7 +28,7 @@ entry:
 
 define arm_aapcs_vfpcc <4 x float> @test_vldrwq_gather_base_z_f32(<4 x i32> %addr, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vldrwq_gather_base_z_f32
-; CHECK: early-clobber %3:mqpr = MVE_VLDRWU32_qi %0, -300, 1, killed %2 :: (load (s128), align 1)
+; CHECK: early-clobber %3:mqpr = MVE_VLDRWU32_qi %0, -300, 1, killed %2, $noreg :: (load (s128), align 1)
 entry:
   %0 = zext i16 %p to i32
   %1 = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 %0)
@@ -38,7 +38,7 @@ entry:
 
 define arm_aapcs_vfpcc <2 x i64> @test_vldrdq_gather_base_wb_s64(<2 x i64>* %addr) {
 ; CHECK-LABEL: name: test_vldrdq_gather_base_wb_s64
-; CHECK: %2:mqpr, early-clobber %3:mqpr = MVE_VLDRDU64_qi_pre %1, 576, 0, $noreg :: (load (s128), align 1)
+; CHECK: %2:mqpr, early-clobber %3:mqpr = MVE_VLDRDU64_qi_pre %1, 576, 0, $noreg, $noreg :: (load (s128), align 1)
 entry:
   %0 = load <2 x i64>, <2 x i64>* %addr, align 8
   %1 = call { <2 x i64>, <2 x i64> } @llvm.arm.mve.vldr.gather.base.wb.v2i64.v2i64(<2 x i64> %0, i32 576)
@@ -50,7 +50,7 @@ entry:
 
 define arm_aapcs_vfpcc <4 x float> @test_vldrwq_gather_base_wb_z_f32(<4 x i32>* %addr, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vldrwq_gather_base_wb_z_f32
-; CHECK: %4:mqpr, early-clobber %5:mqpr = MVE_VLDRWU32_qi_pre %3, -352, 1, killed %2 :: (load (s128), align 1)
+; CHECK: %4:mqpr, early-clobber %5:mqpr = MVE_VLDRWU32_qi_pre %3, -352, 1, killed %2, $noreg :: (load (s128), align 1)
 entry:
   %0 = load <4 x i32>, <4 x i32>* %addr, align 8
   %1 = zext i16 %p to i32
@@ -65,7 +65,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrbq_scatter_offset_s32(i8* %base, <4 x i32> %offset, <4 x i32> %value) {
 ; CHECK-LABEL: name: test_vstrbq_scatter_offset_s32
-; CHECK: MVE_VSTRB32_rq %2, %0, %1, 0, $noreg :: (store (s32), align 1)
+; CHECK: MVE_VSTRB32_rq %2, %0, %1, 0, $noreg, $noreg :: (store (s32), align 1)
 entry:
   call void @llvm.arm.mve.vstr.scatter.offset.p0i8.v4i32.v4i32(i8* %base, <4 x i32> %offset, <4 x i32> %value, i32 8, i32 0)
   ret void
@@ -73,7 +73,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrbq_scatter_offset_p_s8(i8* %base, <16 x i8> %offset, <16 x i8> %value, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vstrbq_scatter_offset_p_s8
-; CHECK: MVE_VSTRB8_rq %2, %0, %1, 1, killed %4 :: (store (s128), align 1)
+; CHECK: MVE_VSTRB8_rq %2, %0, %1, 1, killed %4, $noreg :: (store (s128), align 1)
 entry:
   %0 = zext i16 %p to i32
   %1 = call <16 x i1> @llvm.arm.mve.pred.i2v.v16i1(i32 %0)
@@ -83,7 +83,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrdq_scatter_base_u64(<2 x i64> %addr, <2 x i64> %value) {
 ; CHECK-LABEL: name: test_vstrdq_scatter_base_u64
-; CHECK: MVE_VSTRD64_qi %1, %0, -472, 0, $noreg :: (store (s128), align 1)
+; CHECK: MVE_VSTRD64_qi %1, %0, -472, 0, $noreg, $noreg :: (store (s128), align 1)
 entry:
   call void @llvm.arm.mve.vstr.scatter.base.v2i64.v2i64(<2 x i64> %addr, i32 -472, <2 x i64> %value)
   ret void
@@ -91,7 +91,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrdq_scatter_base_p_s64(<2 x i64> %addr, <2 x i64> %value, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vstrdq_scatter_base_p_s64
-; CHECK: MVE_VSTRD64_qi %1, %0, 888, 1, killed %3 :: (store (s128), align 1)
+; CHECK: MVE_VSTRD64_qi %1, %0, 888, 1, killed %3, $noreg :: (store (s128), align 1)
 entry:
   %0 = zext i16 %p to i32
   %1 = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 %0)
@@ -101,7 +101,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrdq_scatter_base_wb_s64(<2 x i64>* %addr, <2 x i64> %value) {
 ; CHECK-LABEL: name: test_vstrdq_scatter_base_wb_s64
-; CHECK: %3:mqpr = MVE_VSTRD64_qi_pre %1, %2, 208, 0, $noreg :: (store (s128), align 1)
+; CHECK: %3:mqpr = MVE_VSTRD64_qi_pre %1, %2, 208, 0, $noreg, $noreg :: (store (s128), align 1)
 entry:
   %0 = load <2 x i64>, <2 x i64>* %addr, align 8
   %1 = call <2 x i64> @llvm.arm.mve.vstr.scatter.base.wb.v2i64.v2i64(<2 x i64> %0, i32 208, <2 x i64> %value)
@@ -111,7 +111,7 @@ entry:
 
 define arm_aapcs_vfpcc void @test_vstrdq_scatter_base_wb_p_s64(<2 x i64>* %addr, <2 x i64> %value, i16 zeroext %p) {
 ; CHECK-LABEL: name: test_vstrdq_scatter_base_wb_p_s64
-; CHECK: %5:mqpr = MVE_VSTRD64_qi_pre %1, %3, 248, 1, killed %4 :: (store (s128), align 1)
+; CHECK: %5:mqpr = MVE_VSTRD64_qi_pre %1, %3, 248, 1, killed %4, $noreg :: (store (s128), align 1)
 entry:
   %0 = load <2 x i64>, <2 x i64>* %addr, align 8
   %1 = zext i16 %p to i32

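The post-increment distribution tests below exercise the conversion of plain MVE loads and stores into their writeback forms (e.g. MVE_VLDRWU32 into MVE_VLDRWU32_post); the updated checks confirm the extra operand is simply carried through the conversion. A minimal input/output pair from the first test:

  input:  %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
  output: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
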
diff --git a/llvm/test/CodeGen/Thumb2/mve-postinc-distribute.mir b/llvm/test/CodeGen/Thumb2/mve-postinc-distribute.mir
index f2b0510957eb3..623e28a1af1b2 100644
--- a/llvm/test/CodeGen/Thumb2/mve-postinc-distribute.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-postinc-distribute.mir
@@ -89,11 +89,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRWU32
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -116,11 +116,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU16
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRHU16 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHU16 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -143,11 +143,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU8
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU8_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRBU8 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU8 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -170,11 +170,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS32
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_post1:%[0-9]+]]:mqpr = MVE_VLDRBS32_post [[COPY]], 32, 0, $noreg :: (load (s32), align 8)
+    ; CHECK: [[MVE_VLDRBS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_post1:%[0-9]+]]:mqpr = MVE_VLDRBS32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s32), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBS32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRBS32 %0, 0, 0, $noreg :: (load (s32), align 8)
+    %1:mqpr = MVE_VLDRBS32 %0, 0, 0, $noreg, $noreg :: (load (s32), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -197,11 +197,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU32
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_post1:%[0-9]+]]:mqpr = MVE_VLDRBU32_post [[COPY]], 32, 0, $noreg :: (load (s32), align 8)
+    ; CHECK: [[MVE_VLDRBU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_post1:%[0-9]+]]:mqpr = MVE_VLDRBU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s32), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRBU32 %0, 0, 0, $noreg :: (load (s32), align 8)
+    %1:mqpr = MVE_VLDRBU32 %0, 0, 0, $noreg, $noreg :: (load (s32), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -224,11 +224,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHS32
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_post1:%[0-9]+]]:mqpr = MVE_VLDRHS32_post [[COPY]], 32, 0, $noreg :: (load (s64))
+    ; CHECK: [[MVE_VLDRHS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_post1:%[0-9]+]]:mqpr = MVE_VLDRHS32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s64))
     ; CHECK: $r0 = COPY [[MVE_VLDRHS32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRHS32 %0, 0, 0, $noreg :: (load (s64), align 8)
+    %1:mqpr = MVE_VLDRHS32 %0, 0, 0, $noreg, $noreg :: (load (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -251,11 +251,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU32
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_post1:%[0-9]+]]:mqpr = MVE_VLDRHU32_post [[COPY]], 32, 0, $noreg :: (load (s64))
+    ; CHECK: [[MVE_VLDRHU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_post1:%[0-9]+]]:mqpr = MVE_VLDRHU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s64))
     ; CHECK: $r0 = COPY [[MVE_VLDRHU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRHU32 %0, 0, 0, $noreg :: (load (s64), align 8)
+    %1:mqpr = MVE_VLDRHU32 %0, 0, 0, $noreg, $noreg :: (load (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -278,11 +278,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS16
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_post1:%[0-9]+]]:mqpr = MVE_VLDRBS16_post [[COPY]], 32, 0, $noreg :: (load (s64))
+    ; CHECK: [[MVE_VLDRBS16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_post1:%[0-9]+]]:mqpr = MVE_VLDRBS16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s64))
     ; CHECK: $r0 = COPY [[MVE_VLDRBS16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRBS16 %0, 0, 0, $noreg :: (load (s64), align 8)
+    %1:mqpr = MVE_VLDRBS16 %0, 0, 0, $noreg, $noreg :: (load (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -305,11 +305,11 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU16
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_post1:%[0-9]+]]:mqpr = MVE_VLDRBU16_post [[COPY]], 32, 0, $noreg :: (load (s64))
+    ; CHECK: [[MVE_VLDRBU16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_post1:%[0-9]+]]:mqpr = MVE_VLDRBU16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s64))
     ; CHECK: $r0 = COPY [[MVE_VLDRBU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %1:mqpr = MVE_VLDRBU16 %0, 0, 0, $noreg :: (load (s64), align 8)
+    %1:mqpr = MVE_VLDRBU16 %0, 0, 0, $noreg, $noreg :: (load (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -333,12 +333,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRWU32_post:%[0-9]+]]:rgpr = MVE_VSTRWU32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRWU32_post:%[0-9]+]]:rgpr = MVE_VSTRWU32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:gprnopc = COPY $r0
-    MVE_VSTRWU32 %1, %0, 0, 0, $noreg :: (store (s128), align 8)
+    MVE_VSTRWU32 %1, %0, 0, 0, $noreg, $noreg :: (store (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -362,12 +362,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRHU16_post:%[0-9]+]]:rgpr = MVE_VSTRHU16_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRHU16_post:%[0-9]+]]:rgpr = MVE_VSTRHU16_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRHU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:gprnopc = COPY $r0
-    MVE_VSTRHU16 %1, %0, 0, 0, $noreg :: (store (s128), align 8)
+    MVE_VSTRHU16 %1, %0, 0, 0, $noreg, $noreg :: (store (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -391,12 +391,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRBU8_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:gprnopc = COPY $r0
-    MVE_VSTRBU8 %1, %0, 0, 0, $noreg :: (store (s128), align 8)
+    MVE_VSTRBU8 %1, %0, 0, 0, $noreg, $noreg :: (store (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -420,12 +420,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRH32_post:%[0-9]+]]:tgpr = MVE_VSTRH32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s64))
+    ; CHECK: [[MVE_VSTRH32_post:%[0-9]+]]:tgpr = MVE_VSTRH32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s64))
     ; CHECK: $r0 = COPY [[MVE_VSTRH32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    MVE_VSTRH32 %1, %0, 0, 0, $noreg :: (store (s64), align 8)
+    MVE_VSTRH32 %1, %0, 0, 0, $noreg, $noreg :: (store (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -449,12 +449,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB32_post:%[0-9]+]]:tgpr = MVE_VSTRB32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s32), align 8)
+    ; CHECK: [[MVE_VSTRB32_post:%[0-9]+]]:tgpr = MVE_VSTRB32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s32), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    MVE_VSTRB32 %1, %0, 0, 0, $noreg :: (store (s32), align 8)
+    MVE_VSTRB32 %1, %0, 0, 0, $noreg, $noreg :: (store (s32), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -478,12 +478,12 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_post:%[0-9]+]]:tgpr = MVE_VSTRB16_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s64))
+    ; CHECK: [[MVE_VSTRB16_post:%[0-9]+]]:tgpr = MVE_VSTRB16_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s64))
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    MVE_VSTRB16 %1, %0, 0, 0, $noreg :: (store (s64), align 8)
+    MVE_VSTRB16 %1, %0, 0, 0, $noreg, $noreg :: (store (s64), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -507,14 +507,14 @@ body:             |
     ; CHECK-LABEL: name: ld0ld4
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -537,14 +537,14 @@ body:             |
     ; CHECK-LABEL: name: ld4ld0
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -568,16 +568,16 @@ body:             |
     ; CHECK-LABEL: name: ld0ld4ld0
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
-    %4:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
+    %4:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -601,16 +601,16 @@ body:             |
     ; CHECK-LABEL: name: ld4ld0ld4
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
-    %4:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    %4:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -633,14 +633,14 @@ body:             |
     ; CHECK-LABEL: name: addload
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -28, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -663,14 +663,14 @@ body:             |
     ; CHECK-LABEL: name: sub
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2SUBri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -694,15 +694,15 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:gprnopc = COPY $r0
     ; CHECK: [[t2ADDri:%[0-9]+]]:rgpr = nuw t2ADDri [[COPY]], 32, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[COPY]]
     ; CHECK: $r0 = COPY [[t2ADDri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %0
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
@@ -728,14 +728,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
     ; CHECK: [[t2ADDri:%[0-9]+]]:rgpr = nuw t2ADDri [[COPY]], 32, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[t2ADDri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
     %2:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
-    %4:rgpr, %3:mqpr = MVE_VLDRWU32_post %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    %4:rgpr, %3:mqpr = MVE_VLDRWU32_post %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -758,15 +758,15 @@ body:             |
     ; CHECK-LABEL: name: badScale
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:gprnopc = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: [[t2SUBri:%[0-9]+]]:rgpr = nuw t2SUBri [[COPY]], 3, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[t2SUBri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2SUBri %0, 3, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -789,15 +789,15 @@ body:             |
     ; CHECK-LABEL: name: badRange
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:gprnopc = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: [[t2SUBri:%[0-9]+]]:rgpr = nuw t2SUBri [[COPY]], -300, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], -300, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], -300, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[t2SUBri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2SUBri %0, -300, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, -300, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, -300, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -821,15 +821,15 @@ body:             |
     ; CHECK-LABEL: name: addUseOK
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: [[t2LSRri:%[0-9]+]]:rgpr = nuw t2LSRri [[MVE_VLDRWU32_post]], 2, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: $r0 = COPY [[t2LSRri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2SUBri %0, 32, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     %4:rgpr = nuw t2LSRri %2, 2, 14, $noreg, $noreg
     $r0 = COPY %4
     tBX_RET 14, $noreg, implicit $r0
@@ -856,15 +856,15 @@ body:             |
     ; CHECK: [[COPY:%[0-9]+]]:gprnopc = COPY $r0
     ; CHECK: [[t2SUBri:%[0-9]+]]:rgpr = nuw t2SUBri [[COPY]], 32, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: [[t2LSRri:%[0-9]+]]:rgpr = nuw t2LSRri [[t2SUBri]], 2, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_1:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[COPY]], 4, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[t2LSRri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
     %2:rgpr = nuw t2SUBri %0, 32, 14, $noreg, $noreg
     %4:rgpr = nuw t2LSRri %2, 2, 14, $noreg, $noreg
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %4
     tBX_RET 14, $noreg, implicit $r0
 
@@ -888,16 +888,16 @@ body:             |
     ; CHECK-LABEL: name: addUseKilled
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], -32, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: [[t2LSRri:%[0-9]+]]:rgpr = nuw t2LSRri [[MVE_VLDRWU32_post]], 2, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], 36, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[t2LSRri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:gprnopc = COPY $r0
-    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 0, 0, $noreg, $noreg :: (load (s128), align 8)
     %2:rgpr = nuw t2SUBri %0, 32, 14, $noreg, $noreg
     %4:rgpr = nuw t2LSRri killed %2, 2, 14, $noreg, $noreg
-    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg :: (load (s128), align 8)
+    %3:mqpr = MVE_VLDRWU32 %0, 4, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %4
     tBX_RET 14, $noreg, implicit $r0
 
@@ -919,13 +919,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRWU32_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_post:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_post1:%[0-9]+]]:mqpr = MVE_VLDRWU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRWU32_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRWU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRWU32_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -947,13 +947,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU16_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHU16_:%[0-9]+]]:mqpr = MVE_VLDRHU16 [[MVE_VLDRHU16_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU16_post:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_post1:%[0-9]+]]:mqpr = MVE_VLDRHU16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU16_:%[0-9]+]]:mqpr = MVE_VLDRHU16 [[MVE_VLDRHU16_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRHU16_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHU16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRHU16_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHU16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -975,13 +975,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU8_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU8_:%[0-9]+]]:mqpr = MVE_VLDRBU8 [[MVE_VLDRBU8_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU8_:%[0-9]+]]:mqpr = MVE_VLDRBU8 [[MVE_VLDRBU8_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU8_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRBU8_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU8 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRBU8_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU8 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1003,13 +1003,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS32_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_post1:%[0-9]+]]:mqpr = MVE_VLDRBS32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBS32_:%[0-9]+]]:mqpr = MVE_VLDRBS32 [[MVE_VLDRBS32_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_post1:%[0-9]+]]:mqpr = MVE_VLDRBS32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS32_:%[0-9]+]]:mqpr = MVE_VLDRBS32 [[MVE_VLDRBS32_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBS32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBS32_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBS32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBS32_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBS32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1031,13 +1031,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU32_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_post1:%[0-9]+]]:mqpr = MVE_VLDRBU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU32_:%[0-9]+]]:mqpr = MVE_VLDRBU32 [[MVE_VLDRBU32_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_post1:%[0-9]+]]:mqpr = MVE_VLDRBU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU32_:%[0-9]+]]:mqpr = MVE_VLDRBU32 [[MVE_VLDRBU32_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBU32_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBU32_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1059,13 +1059,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHS32_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_post1:%[0-9]+]]:mqpr = MVE_VLDRHS32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHS32_:%[0-9]+]]:mqpr = MVE_VLDRHS32 [[MVE_VLDRHS32_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHS32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_post1:%[0-9]+]]:mqpr = MVE_VLDRHS32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHS32_:%[0-9]+]]:mqpr = MVE_VLDRHS32 [[MVE_VLDRHS32_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHS32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRHS32_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHS32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRHS32_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHS32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1087,13 +1087,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU32_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_post1:%[0-9]+]]:mqpr = MVE_VLDRHU32_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHU32_:%[0-9]+]]:mqpr = MVE_VLDRHU32 [[MVE_VLDRHU32_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU32_post:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_post1:%[0-9]+]]:mqpr = MVE_VLDRHU32_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU32_:%[0-9]+]]:mqpr = MVE_VLDRHU32 [[MVE_VLDRHU32_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRHU32_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRHU32_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1115,13 +1115,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS16_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_post1:%[0-9]+]]:mqpr = MVE_VLDRBS16_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBS16_:%[0-9]+]]:mqpr = MVE_VLDRBS16 [[MVE_VLDRBS16_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_post1:%[0-9]+]]:mqpr = MVE_VLDRBS16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS16_:%[0-9]+]]:mqpr = MVE_VLDRBS16 [[MVE_VLDRBS16_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBS16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBS16_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBS16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBS16_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBS16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1143,13 +1143,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU16_post
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_post1:%[0-9]+]]:mqpr = MVE_VLDRBU16_post [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU16_:%[0-9]+]]:mqpr = MVE_VLDRBU16 [[MVE_VLDRBU16_post]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU16_post:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_post1:%[0-9]+]]:mqpr = MVE_VLDRBU16_post [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU16_:%[0-9]+]]:mqpr = MVE_VLDRBU16 [[MVE_VLDRBU16_post]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBU16_post %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBU16_post %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1172,14 +1172,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRWU32_post:%[0-9]+]]:rgpr = MVE_VSTRWU32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRWU32 [[COPY]], [[MVE_VSTRWU32_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRWU32_post:%[0-9]+]]:rgpr = MVE_VSTRWU32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRWU32 [[COPY]], [[MVE_VSTRWU32_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRWU32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRWU32_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRWU32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRWU32_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRWU32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1202,14 +1202,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRHU16_post:%[0-9]+]]:rgpr = MVE_VSTRHU16_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRHU16 [[COPY]], [[MVE_VSTRHU16_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRHU16_post:%[0-9]+]]:rgpr = MVE_VSTRHU16_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRHU16 [[COPY]], [[MVE_VSTRHU16_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRHU16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRHU16_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRHU16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRHU16_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRHU16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1232,14 +1232,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRBU8 [[COPY]], [[MVE_VSTRBU8_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRBU8 [[COPY]], [[MVE_VSTRBU8_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRBU8_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRBU8_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRBU8 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRBU8_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRBU8 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1262,14 +1262,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRH32_post:%[0-9]+]]:tgpr = MVE_VSTRH32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRH32 [[COPY]], [[MVE_VSTRH32_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRH32_post:%[0-9]+]]:tgpr = MVE_VSTRH32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRH32 [[COPY]], [[MVE_VSTRH32_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRH32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRH32_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRH32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRH32_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRH32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1292,14 +1292,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB32_post:%[0-9]+]]:tgpr = MVE_VSTRB32_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB32 [[COPY]], [[MVE_VSTRB32_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB32_post:%[0-9]+]]:tgpr = MVE_VSTRB32_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB32 [[COPY]], [[MVE_VSTRB32_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB32_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB32_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB32_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1322,14 +1322,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_post:%[0-9]+]]:tgpr = MVE_VSTRB16_post [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_post]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_post:%[0-9]+]]:tgpr = MVE_VSTRB16_post [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_post]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_post]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_post %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_post %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1351,13 +1351,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRWU32_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRWU32_pre:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRWU32_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_pre:%[0-9]+]]:rgpr, [[MVE_VLDRWU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRWU32_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRWU32_:%[0-9]+]]:mqpr = MVE_VLDRWU32 [[MVE_VLDRWU32_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRWU32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRWU32_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRWU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRWU32_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRWU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1379,13 +1379,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU16_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU16_pre:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_pre1:%[0-9]+]]:mqpr = MVE_VLDRHU16_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHU16_:%[0-9]+]]:mqpr = MVE_VLDRHU16 [[MVE_VLDRHU16_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU16_pre:%[0-9]+]]:rgpr, [[MVE_VLDRHU16_pre1:%[0-9]+]]:mqpr = MVE_VLDRHU16_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU16_:%[0-9]+]]:mqpr = MVE_VLDRHU16 [[MVE_VLDRHU16_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHU16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRHU16_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHU16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRHU16_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHU16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1407,13 +1407,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU8_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU8_pre:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU8_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU8_:%[0-9]+]]:mqpr = MVE_VLDRBU8 [[MVE_VLDRBU8_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU8_pre:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU8_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU8_:%[0-9]+]]:mqpr = MVE_VLDRBU8 [[MVE_VLDRBU8_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU8_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:rgpr = COPY $r0
-    %2:rgpr, %1:mqpr = MVE_VLDRBU8_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU8 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:rgpr, %1:mqpr = MVE_VLDRBU8_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU8 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1435,13 +1435,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS32_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_pre1:%[0-9]+]]:mqpr = MVE_VLDRBS32_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBS32_:%[0-9]+]]:mqpr = MVE_VLDRBS32 [[MVE_VLDRBS32_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBS32_pre1:%[0-9]+]]:mqpr = MVE_VLDRBS32_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS32_:%[0-9]+]]:mqpr = MVE_VLDRBS32 [[MVE_VLDRBS32_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBS32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBS32_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBS32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBS32_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBS32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1463,13 +1463,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU32_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU32_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU32_:%[0-9]+]]:mqpr = MVE_VLDRBU32 [[MVE_VLDRBU32_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU32_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU32_:%[0-9]+]]:mqpr = MVE_VLDRBU32 [[MVE_VLDRBU32_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBU32_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBU32_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1491,13 +1491,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHS32_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHS32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_pre1:%[0-9]+]]:mqpr = MVE_VLDRHS32_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHS32_:%[0-9]+]]:mqpr = MVE_VLDRHS32 [[MVE_VLDRHS32_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHS32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRHS32_pre1:%[0-9]+]]:mqpr = MVE_VLDRHS32_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHS32_:%[0-9]+]]:mqpr = MVE_VLDRHS32 [[MVE_VLDRHS32_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHS32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRHS32_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHS32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRHS32_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHS32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1519,13 +1519,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRHU32_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRHU32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRHU32_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRHU32_:%[0-9]+]]:mqpr = MVE_VLDRHU32 [[MVE_VLDRHU32_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU32_pre:%[0-9]+]]:tgpr, [[MVE_VLDRHU32_pre1:%[0-9]+]]:mqpr = MVE_VLDRHU32_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRHU32_:%[0-9]+]]:mqpr = MVE_VLDRHU32 [[MVE_VLDRHU32_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRHU32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRHU32_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRHU32 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRHU32_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRHU32 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1547,13 +1547,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBS16_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBS16_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_pre1:%[0-9]+]]:mqpr = MVE_VLDRBS16_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBS16_:%[0-9]+]]:mqpr = MVE_VLDRBS16 [[MVE_VLDRBS16_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS16_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBS16_pre1:%[0-9]+]]:mqpr = MVE_VLDRBS16_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBS16_:%[0-9]+]]:mqpr = MVE_VLDRBS16 [[MVE_VLDRBS16_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBS16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBS16_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBS16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBS16_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBS16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1575,13 +1575,13 @@ body:             |
     ; CHECK-LABEL: name: MVE_VLDRBU16_pre
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VLDRBU16_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU16_pre [[COPY]], 32, 0, $noreg :: (load (s128), align 8)
-    ; CHECK: [[MVE_VLDRBU16_:%[0-9]+]]:mqpr = MVE_VLDRBU16 [[MVE_VLDRBU16_pre]], -16, 0, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU16_pre:%[0-9]+]]:tgpr, [[MVE_VLDRBU16_pre1:%[0-9]+]]:mqpr = MVE_VLDRBU16_pre [[COPY]], 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    ; CHECK: [[MVE_VLDRBU16_:%[0-9]+]]:mqpr = MVE_VLDRBU16 [[MVE_VLDRBU16_pre]], -16, 0, $noreg, $noreg :: (load (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VLDRBU16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %0:tgpr = COPY $r0
-    %2:tgpr, %1:mqpr = MVE_VLDRBU16_pre %0, 32, 0, $noreg :: (load (s128), align 8)
-    %1:mqpr = MVE_VLDRBU16 %0, 16, 0, $noreg :: (load (s128), align 8)
+    %2:tgpr, %1:mqpr = MVE_VLDRBU16_pre %0, 32, 0, $noreg, $noreg :: (load (s128), align 8)
+    %1:mqpr = MVE_VLDRBU16 %0, 16, 0, $noreg, $noreg :: (load (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1604,14 +1604,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRWU32_pre:%[0-9]+]]:rgpr = MVE_VSTRWU32_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRWU32 [[COPY]], [[MVE_VSTRWU32_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRWU32_pre:%[0-9]+]]:rgpr = MVE_VSTRWU32_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRWU32 [[COPY]], [[MVE_VSTRWU32_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRWU32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRWU32_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRWU32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRWU32_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRWU32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1634,14 +1634,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRHU16_pre:%[0-9]+]]:rgpr = MVE_VSTRHU16_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRHU16 [[COPY]], [[MVE_VSTRHU16_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRHU16_pre:%[0-9]+]]:rgpr = MVE_VSTRHU16_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRHU16 [[COPY]], [[MVE_VSTRHU16_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRHU16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRHU16_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRHU16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRHU16_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRHU16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1664,14 +1664,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:rgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRBU8_pre:%[0-9]+]]:rgpr = MVE_VSTRBU8_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRBU8 [[COPY]], [[MVE_VSTRBU8_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRBU8_pre:%[0-9]+]]:rgpr = MVE_VSTRBU8_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRBU8 [[COPY]], [[MVE_VSTRBU8_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRBU8_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:rgpr = COPY $r0
-    %2:rgpr = MVE_VSTRBU8_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRBU8 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:rgpr = MVE_VSTRBU8_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRBU8 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1694,14 +1694,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRH32_pre:%[0-9]+]]:tgpr = MVE_VSTRH32_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRH32 [[COPY]], [[MVE_VSTRH32_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRH32_pre:%[0-9]+]]:tgpr = MVE_VSTRH32_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRH32 [[COPY]], [[MVE_VSTRH32_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRH32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRH32_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRH32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRH32_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRH32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1724,14 +1724,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB32_pre:%[0-9]+]]:tgpr = MVE_VSTRB32_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB32 [[COPY]], [[MVE_VSTRB32_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB32_pre:%[0-9]+]]:tgpr = MVE_VSTRB32_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB32 [[COPY]], [[MVE_VSTRB32_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB32_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB32_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB32 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB32_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB32 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1754,14 +1754,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -16, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1784,18 +1784,18 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -16, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -48, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], 2, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -16, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], -48, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre]], 2, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, -16, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 34, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, -16, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 34, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1819,16 +1819,16 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: [[MVE_VSTRB16_pre1:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 64, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre1]], -48, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre1:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 64, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[MVE_VSTRB16_pre1]], -48, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_pre1]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    %3:tgpr = MVE_VSTRB16_pre %1, %0, 64, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 16, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    %3:tgpr = MVE_VSTRB16_pre %1, %0, 64, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 16, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %3
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1852,15 +1852,15 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[COPY1]], 0, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[COPY1]], 0, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: [[t2ADDri:%[0-9]+]]:tgpr = nuw t2ADDri [[COPY1]], 32, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: $r0 = COPY [[t2ADDri]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, 0, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_pre %1, %0, 32, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 0, 0, $noreg, $noreg :: (store (s128), align 8)
     %3:tgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %3
     tBX_RET 14, $noreg, implicit $r0
@@ -1884,14 +1884,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRBU8_pre:%[0-9]+]]:tgpr = MVE_VSTRBU8_pre [[COPY]], [[COPY1]], 33, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRWU32 [[COPY]], [[COPY1]], 0, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRBU8_pre:%[0-9]+]]:tgpr = MVE_VSTRBU8_pre [[COPY]], [[COPY1]], 33, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRWU32 [[COPY]], [[COPY1]], 0, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRBU8_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRBU8_pre %1, %0, 33, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRWU32 %1, %0, 0, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRBU8_pre %1, %0, 33, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRWU32 %1, %0, 0, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1914,14 +1914,14 @@ body:             |
     ; CHECK: liveins: $r0, $q0
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
-    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 100, 0, $noreg :: (store (s128), align 8)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[COPY1]], -100, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: [[MVE_VSTRB16_pre:%[0-9]+]]:tgpr = MVE_VSTRB16_pre [[COPY]], [[COPY1]], 100, 0, $noreg, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[COPY1]], -100, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[MVE_VSTRB16_pre]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
-    %2:tgpr = MVE_VSTRB16_pre %1, %0, 100, 0, $noreg :: (store (s128), align 8)
-    MVE_VSTRB16 %1, %0, -100, 0, $noreg :: (store (s128), align 8)
+    %2:tgpr = MVE_VSTRB16_pre %1, %0, 100, 0, $noreg, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, -100, 0, $noreg, $noreg :: (store (s128), align 8)
     $r0 = COPY %2
     tBX_RET 14, $noreg, implicit $r0
 
@@ -1946,13 +1946,13 @@ body:             |
     ; CHECK: [[COPY:%[0-9]+]]:mqpr = COPY $q0
     ; CHECK: [[COPY1:%[0-9]+]]:tgpr = COPY $r0
     ; CHECK: [[t2LDRB_POST:%[0-9]+]]:rgpr, [[t2LDRB_POST1:%[0-9]+]]:tgpr = t2LDRB_POST [[COPY1]], 32, 14 /* CC::al */, $noreg :: (load (s8), align 2)
-    ; CHECK: MVE_VSTRB16 [[COPY]], [[t2LDRB_POST1]], -22, 0, $noreg :: (store (s128), align 8)
+    ; CHECK: MVE_VSTRB16 [[COPY]], [[t2LDRB_POST1]], -22, 0, $noreg, $noreg :: (store (s128), align 8)
     ; CHECK: $r0 = COPY [[t2LDRB_POST1]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $r0
     %1:mqpr = COPY $q0
     %0:tgpr = COPY $r0
     %2:rgpr = t2LDRBi12 %0:tgpr, 0, 14, $noreg :: (load (s8), align 2)
-    MVE_VSTRB16 %1, %0, 10, 0, $noreg :: (store (s128), align 8)
+    MVE_VSTRB16 %1, %0, 10, 0, $noreg, $noreg :: (store (s128), align 8)
     %3:rgpr = nuw t2ADDri %0, 32, 14, $noreg, $noreg
     $r0 = COPY %3
     tBX_RET 14, $noreg, implicit $r0

diff --git a/llvm/test/CodeGen/Thumb2/mve-stacksplot.mir b/llvm/test/CodeGen/Thumb2/mve-stacksplot.mir
index 4fd4c01b1b2ca..fecd8e799657d 100644
--- a/llvm/test/CodeGen/Thumb2/mve-stacksplot.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-stacksplot.mir
@@ -41,7 +41,7 @@ body: |
     ; CHECK: $lr = IMPLICIT_DEF
     ; CHECK: t2STRi12 killed $r0, $sp, 0, 14 /* CC::al */, $noreg :: (store (s32) into %stack.1)
     ; CHECK: $r0 = tMOVr killed $sp, 14 /* CC::al */, $noreg
-    ; CHECK: renamable $q2 = MVE_VLDRBU32 killed $r0, 16, 0, $noreg :: (load (s32) from %stack.0 + 12)
+    ; CHECK: renamable $q2 = MVE_VLDRBU32 killed $r0, 16, 0, $noreg, $noreg :: (load (s32) from %stack.0 + 12)
     ; CHECK: $r0 = t2LDRi12 $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.1)
     ; CHECK: KILL $r0
     ; CHECK: KILL $r1
@@ -72,7 +72,7 @@ body: |
     $r12 = IMPLICIT_DEF
     $lr = IMPLICIT_DEF
 
-    renamable $q2 = MVE_VLDRBU32 %stack.0, 12, 0, $noreg :: (load (s32) from %stack.0 + 12)
+    renamable $q2 = MVE_VLDRBU32 %stack.0, 12, 0, $noreg, $noreg :: (load (s32) from %stack.0 + 12)
 
     KILL $r0
     KILL $r1
@@ -134,7 +134,7 @@ body: |
     ; CHECK: $lr = IMPLICIT_DEF
     ; CHECK: t2STRi12 killed $r0, $sp, 0, 14 /* CC::al */, $noreg :: (store (s32) into %stack.2)
     ; CHECK: $r0 = t2ADDri killed $sp, 1152, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: renamable $q2 = MVE_VLDRBU8 killed $r0, 52, 0, $noreg :: (load (s32) from %stack.0)
+    ; CHECK: renamable $q2 = MVE_VLDRBU8 killed $r0, 52, 0, $noreg, $noreg :: (load (s32) from %stack.0)
     ; CHECK: $r0 = t2LDRi12 $sp, 0, 14 /* CC::al */, $noreg :: (load (s32) from %stack.2)
     ; CHECK: KILL $r0
     ; CHECK: KILL $r1
@@ -165,7 +165,7 @@ body: |
     $r12 = IMPLICIT_DEF
     $lr = IMPLICIT_DEF
 
-    renamable $q2 = MVE_VLDRBU8 %stack.0, 0, 0, $noreg :: (load (s32) from %stack.0)
+    renamable $q2 = MVE_VLDRBU8 %stack.0, 0, 0, $noreg, $noreg :: (load (s32) from %stack.0)
 
     KILL $r0
     KILL $r1

diff --git a/llvm/test/CodeGen/Thumb2/mve-tp-loop.mir b/llvm/test/CodeGen/Thumb2/mve-tp-loop.mir
index 111893d06ca18..30e80f3033817 100644
--- a/llvm/test/CodeGen/Thumb2/mve-tp-loop.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-tp-loop.mir
@@ -70,15 +70,16 @@ body:             |
     ; CHECK: [[t2LSRri:%[0-9]+]]:rgpr = t2LSRri killed [[t2ADDri]], 4, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: [[t2WhileLoopSetup:%[0-9]+]]:gprlr = t2WhileLoopSetup killed [[t2LSRri]]
     ; CHECK: t2WhileLoopStart [[t2WhileLoopSetup]], %bb.2, implicit-def $cpsr
+    ; CHECK: t2B %bb.1, 14 /* CC::al */, $noreg
     ; CHECK: .1:
     ; CHECK: [[PHI:%[0-9]+]]:rgpr = PHI [[COPY1]], %bb.0, %7, %bb.1
     ; CHECK: [[PHI1:%[0-9]+]]:rgpr = PHI [[COPY2]], %bb.0, %9, %bb.1
     ; CHECK: [[PHI2:%[0-9]+]]:gprlr = PHI [[t2WhileLoopSetup]], %bb.0, %11, %bb.1
     ; CHECK: [[PHI3:%[0-9]+]]:rgpr = PHI [[COPY]], %bb.0, %13, %bb.1
-    ; CHECK: [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI3]], 0, $noreg
+    ; CHECK: [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI3]], 0, $noreg, $noreg
     ; CHECK: [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri [[PHI3]], 16, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[PHI]], 16, 1, [[MVE_VCTP8_]]
-    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VLDRBU8_post1]], [[PHI1]], 16, 1, [[MVE_VCTP8_]]
+    ; CHECK: [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[PHI]], 16, 1, [[MVE_VCTP8_]], $noreg
+    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VLDRBU8_post1]], [[PHI1]], 16, 1, [[MVE_VCTP8_]], $noreg
     ; CHECK: [[t2LoopDec:%[0-9]+]]:gprlr = t2LoopDec [[PHI2]], 1
     ; CHECK: t2LoopEnd [[t2LoopDec]], %bb.1, implicit-def $cpsr
     ; CHECK: t2B %bb.2, 14 /* CC::al */, $noreg
@@ -110,15 +111,16 @@ body:             |
   ; CHECK:   [[t2LSRri:%[0-9]+]]:rgpr = t2LSRri killed [[t2ADDri]], 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   [[t2WhileLoopSetup:%[0-9]+]]:gprlr = t2WhileLoopSetup killed [[t2LSRri]]
   ; CHECK:   t2WhileLoopStart [[t2WhileLoopSetup]], %bb.4, implicit-def $cpsr
+  ; CHECK:   t2B %bb.3, 14 /* CC::al */, $noreg
   ; CHECK: bb.3:
   ; CHECK:   [[PHI:%[0-9]+]]:rgpr = PHI [[COPY1]], %bb.1, %7, %bb.3
   ; CHECK:   [[PHI1:%[0-9]+]]:rgpr = PHI [[COPY2]], %bb.1, %9, %bb.3
   ; CHECK:   [[PHI2:%[0-9]+]]:gprlr = PHI [[t2WhileLoopSetup]], %bb.1, %11, %bb.3
   ; CHECK:   [[PHI3:%[0-9]+]]:rgpr = PHI [[COPY]], %bb.1, %13, %bb.3
-  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI3]], 0, $noreg
+  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI3]], 0, $noreg, $noreg
   ; CHECK:   [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri [[PHI3]], 16, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[PHI]], 16, 1, [[MVE_VCTP8_]]
-  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VLDRBU8_post1]], [[PHI1]], 16, 1, [[MVE_VCTP8_]]
+  ; CHECK:   [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post [[PHI]], 16, 1, [[MVE_VCTP8_]], $noreg
+  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[MVE_VLDRBU8_post1]], [[PHI1]], 16, 1, [[MVE_VCTP8_]], $noreg
   ; CHECK:   [[t2LoopDec:%[0-9]+]]:gprlr = t2LoopDec [[PHI2]], 1
   ; CHECK:   t2LoopEnd [[t2LoopDec]], %bb.3, implicit-def $cpsr
   ; CHECK:   t2B %bb.4, 14 /* CC::al */, $noreg
@@ -162,13 +164,14 @@ body:             |
     ; CHECK: [[t2LSRri:%[0-9]+]]:rgpr = t2LSRri killed [[t2ADDri]], 4, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: [[t2WhileLoopSetup:%[0-9]+]]:gprlr = t2WhileLoopSetup killed [[t2LSRri]]
     ; CHECK: t2WhileLoopStart [[t2WhileLoopSetup]], %bb.2, implicit-def $cpsr
+    ; CHECK: t2B %bb.1, 14 /* CC::al */, $noreg
     ; CHECK: .1:
     ; CHECK: [[PHI:%[0-9]+]]:rgpr = PHI [[COPY2]], %bb.0, %7, %bb.1
     ; CHECK: [[PHI1:%[0-9]+]]:gprlr = PHI [[t2WhileLoopSetup]], %bb.0, %9, %bb.1
     ; CHECK: [[PHI2:%[0-9]+]]:rgpr = PHI [[COPY]], %bb.0, %11, %bb.1
-    ; CHECK: [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg
+    ; CHECK: [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg, $noreg
     ; CHECK: [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri [[PHI2]], 16, 14 /* CC::al */, $noreg, $noreg
-    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY1]], [[PHI]], 16, 1, [[MVE_VCTP8_]]
+    ; CHECK: [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY1]], [[PHI]], 16, 1, [[MVE_VCTP8_]], $noreg
     ; CHECK: [[t2LoopDec:%[0-9]+]]:gprlr = t2LoopDec [[PHI1]], 1
     ; CHECK: t2LoopEnd [[t2LoopDec]], %bb.1, implicit-def $cpsr
     ; CHECK: t2B %bb.2, 14 /* CC::al */, $noreg
@@ -201,13 +204,14 @@ body:             |
   ; CHECK:   [[t2LSRri:%[0-9]+]]:rgpr = t2LSRri killed [[t2ADDri]], 4, 14 /* CC::al */, $noreg, $noreg
   ; CHECK:   [[t2WhileLoopSetup:%[0-9]+]]:gprlr = t2WhileLoopSetup killed [[t2LSRri]]
   ; CHECK:   t2WhileLoopStart [[t2WhileLoopSetup]], %bb.4, implicit-def $cpsr
+  ; CHECK:   t2B %bb.3, 14 /* CC::al */, $noreg
   ; CHECK: bb.3:
   ; CHECK:   [[PHI:%[0-9]+]]:rgpr = PHI [[COPY2]], %bb.1, %7, %bb.3
   ; CHECK:   [[PHI1:%[0-9]+]]:gprlr = PHI [[t2WhileLoopSetup]], %bb.1, %9, %bb.3
   ; CHECK:   [[PHI2:%[0-9]+]]:rgpr = PHI [[COPY]], %bb.1, %11, %bb.3
-  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg
+  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[PHI2]], 0, $noreg, $noreg
   ; CHECK:   [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri [[PHI2]], 16, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY1]], [[PHI]], 16, 1, [[MVE_VCTP8_]]
+  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post [[COPY1]], [[PHI]], 16, 1, [[MVE_VCTP8_]], $noreg
   ; CHECK:   [[t2LoopDec:%[0-9]+]]:gprlr = t2LoopDec [[PHI1]], 1
   ; CHECK:   t2LoopEnd [[t2LoopDec]], %bb.3, implicit-def $cpsr
   ; CHECK:   t2B %bb.4, 14 /* CC::al */, $noreg

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir
index f1495509df6d8..074c95986e6da 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir
@@ -70,20 +70,20 @@ body:             |
     ; CHECK: renamable $r4 = t2ADDrr renamable $r2, renamable $r10, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $vpr, implicit-def $q6, implicit-def $d12, implicit-def $s24, implicit-def $s25, implicit-def $d13, implicit-def $s26, implicit-def $s27, implicit $q1, implicit $q5, implicit killed $r4 {
     ; CHECK:   MVE_VPTv4u32 8, renamable $q1, renamable $q5, 2, implicit-def $vpr
-    ; CHECK:   renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, internal renamable $vpr
+    ; CHECK:   renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, internal renamable $vpr, $noreg
     ; CHECK: }
     ; CHECK: renamable $r4 = t2ADDrr renamable $r11, renamable $r10, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $q7, implicit-def $d14, implicit-def $s28, implicit-def $s29, implicit-def $d15, implicit-def $s30, implicit-def $s31, implicit killed $vpr, implicit killed $r4 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q7 = MVE_VLDRBU32 killed renamable $r4, 0, 1, killed renamable $vpr
+    ; CHECK:   renamable $q7 = MVE_VLDRBU32 killed renamable $r4, 0, 1, killed renamable $vpr, $noreg
     ; CHECK: }
     ; CHECK: t2LoopEnd renamable $lr, %bb.0, implicit-def dead $cpsr
     ; CHECK: t2B %bb.0, 14 /* CC::al */, $noreg
-  renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg
+  renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg, $noreg
   renamable $r4 = t2ADDrr renamable $r2, renamable $r10, 14, $noreg, $noreg
-  renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr
+  renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg
   renamable $r4 = t2ADDrr renamable $r11, renamable $r10, 14, $noreg, $noreg
-  renamable $q7 = MVE_VLDRBU32 killed renamable $r4, 0, 1, killed renamable $vpr
+  renamable $q7 = MVE_VLDRBU32 killed renamable $r4, 0, 1, killed renamable $vpr, $noreg
   t2LoopEnd renamable $lr, %bb.0, implicit-def dead $cpsr
   t2B %bb.0, 14, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir
index 506b8d2687c6a..567f23bed1af4 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir
@@ -67,22 +67,22 @@ body:             |
     ; CHECK-LABEL: name: vpt_2_blocks_2_preds
     ; CHECK: liveins: $q0, $q1, $q2, $r0, $r1
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
-    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit killed $vpr, implicit killed $q1, implicit $q2, implicit killed $q3 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
+    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
     ; CHECK: }
     ; CHECK: $vpr = VMSR_P0 killed $r1, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q3, implicit killed $q2, implicit killed $q0 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
     $vpr = VMSR_P0 killed $r1, 14, $noreg
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir
index 391b74ee0dacd..a66f9f9393810 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir
@@ -68,33 +68,33 @@ body:             |
   ; CHECK:   successors: %bb.1(0x80000000)
   ; CHECK:   liveins: $q0, $q1, $q2, $r0
   ; CHECK:   $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
-  ; CHECK:   $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+  ; CHECK:   $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
   ; CHECK:   BUNDLE implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit $vpr, implicit killed $q1, implicit $q2, implicit killed $q3 {
   ; CHECK:     MVE_VPST 4, implicit $vpr
-  ; CHECK:     renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-  ; CHECK:     renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal renamable $q3, internal renamable $q3, 1, renamable $vpr, undef renamable $q1
+  ; CHECK:     renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+  ; CHECK:     renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal renamable $q3, internal renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q1
   ; CHECK:   }
   ; CHECK: bb.1.bb2:
   ; CHECK:   liveins: $q0, $q1, $q2, $q3, $vpr
   ; CHECK:   BUNDLE implicit-def dead $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q1, implicit killed $q2, implicit killed $q3, implicit killed $q0 {
   ; CHECK:     MVE_VPST 4, implicit $vpr
-  ; CHECK:     renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-  ; CHECK:     renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+  ; CHECK:     renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+  ; CHECK:     renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
   ; CHECK:   }
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit $q0
   bb.0.entry:
     liveins: $q0, $q1, $q2, $r0
 
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 renamable $q3, renamable $q3, 1, renamable $vpr, undef renamable $q1
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 renamable $q3, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q1
 
   bb.1.bb2:
     liveins: $q0, $q1, $q2, $q3, $vpr
 
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir
index ee26f56b605c7..15508351ea33e 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir
@@ -68,28 +68,28 @@ body:             |
     ; CHECK-LABEL: name: vpt_2_blocks_non_consecutive_ins
     ; CHECK: liveins: $q0, $q1, $q2, $r0
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
-    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit $vpr, implicit killed $q1, implicit $q2, implicit killed $q3 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, internal renamable $q3, 1, renamable $vpr, undef renamable $q1
+    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, internal renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q1
     ; CHECK: }
-    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q1, implicit killed $q2, implicit killed $q3, implicit killed $q0 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, renamable $q3, 1, renamable $vpr, undef renamable $q1
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q1
 
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
 
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir
index 594df59695959..1f0b095eec2b3 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir
@@ -71,24 +71,24 @@ body:             |
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def dead $q2, implicit-def $d4, implicit-def $s8, implicit-def $s9, implicit-def $d5, implicit-def $s10, implicit-def $s11, implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit $vpr, implicit killed $q2, implicit $q3, implicit killed $q0 {
     ; CHECK:   MVE_VPST 1, implicit $vpr
-    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, undef renamable $q2
-    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, internal renamable $q2, 1, renamable $vpr, internal undef renamable $q2
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, renamable $q3, 1, renamable $vpr, internal undef renamable $q0
+    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q2
+    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, internal renamable $q2, 1, renamable $vpr, $noreg, internal undef renamable $q2
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, renamable $q3, 1, renamable $vpr, $noreg, internal undef renamable $q0
     ; CHECK: }
     ; CHECK: BUNDLE implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit killed $vpr, implicit killed $q0, implicit killed $q3, implicit killed $q1 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
+    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, undef renamable $q2
-    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q2, 1, renamable $vpr, undef renamable $q2
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, renamable $q3, 1, renamable $vpr, undef renamable $q0
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
-    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q0
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
+    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir
index 03cd4ee690bf4..40c333718ee74 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir
@@ -67,27 +67,27 @@ body:             |
     ; CHECK-LABEL: name: vpt_3_blocks_kill_vpr
     ; CHECK: liveins: $q0, $q1, $q2, $r0
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
-    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit $vpr, implicit killed $q1, implicit $q2, implicit killed $q3 {
     ; CHECK:   MVE_VPST 12, implicit $vpr
-    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, internal renamable $q3, 2, renamable $vpr, undef renamable $q1
+    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, internal renamable $q3, 2, renamable $vpr, $noreg, undef renamable $q1
     ; CHECK: }
-    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q1, implicit killed $q2, implicit killed $q3, implicit killed $q0 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    ; CHECK:   renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, renamable $q3, 1, renamable $vpr, undef renamable $q1
-    $q3 = MVE_VORR $q0, $q0, 0, $noreg, undef $q3
-    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q1
+    $q3 = MVE_VORR $q0, $q0, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q3, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir
index 858bb6a156277..67469e50a5c3b 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir
@@ -67,11 +67,11 @@ body:             |
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q1, implicit killed $q2, implicit killed $q0 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir
index eb47791e4f80a..54621bd307de6 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir
@@ -69,15 +69,15 @@ body:             |
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def dead $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit killed $vpr, implicit killed $q2, implicit killed $q3, implicit killed $q0, implicit killed $q1 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
-    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
+    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir
index 8504ad0814c27..02e50a36284d3 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir
@@ -70,19 +70,19 @@ body:             |
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def dead $q2, implicit-def $d4, implicit-def $s8, implicit-def $s9, implicit-def $d5, implicit-def $s10, implicit-def $s11, implicit-def dead $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit-def $q1, implicit-def $d2, implicit-def $s4, implicit-def $s5, implicit-def $d3, implicit-def $s6, implicit-def $s7, implicit killed $vpr, implicit killed $q2, implicit killed $q3, implicit killed $q0, implicit killed $q1 {
     ; CHECK:   MVE_VPST 1, implicit $vpr
-    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, undef renamable $q2
-    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, internal renamable $q2, 1, renamable $vpr, internal undef renamable $q2
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
+    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q2
+    ; CHECK:   renamable $q2 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, internal renamable $q2, 1, renamable $vpr, $noreg, internal undef renamable $q2
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    ; CHECK:   renamable $q1 = nnan ninf nsz MVE_VMINNMf32 internal killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    ; CHECK: $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
     $vpr = VMSR_P0 killed $r0, 14, $noreg
-    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, undef renamable $q2
-    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q2, 1, renamable $vpr, undef renamable $q2
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, killed renamable $q0
-    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, killed renamable $q1
-    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, undef $q0
+    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q2 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, undef renamable $q2
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q2, renamable $q3, 1, renamable $vpr, $noreg, killed renamable $q0
+    renamable $q1 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q0, killed renamable $q3, 1, killed renamable $vpr, $noreg, killed renamable $q1
+    $q0 = MVE_VORR killed $q1, killed $q1, 0, $noreg, $noreg, undef $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-debug.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-debug.mir
index ce540e5061314..34c24811aa5a9 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-debug.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-debug.mir
@@ -82,13 +82,13 @@ body:             |
     ; CHECK: DBG_VALUE $noreg, $noreg, !20, !DIExpression(), debug-location !21
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q2, implicit-def $d4, implicit-def $s8, implicit-def $s9, implicit-def $d5, implicit-def $s10, implicit-def $s11, implicit killed $q1, implicit killed $q0, implicit killed $q2, debug-location !23 {
     ; CHECK:   MVE_VPTv4s32 12, renamable $q1, renamable $q0, 10, implicit-def $vpr, debug-location !23
-    ; CHECK:   renamable $q2 = MVE_VADDi32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q2, debug-location !23
+    ; CHECK:   renamable $q2 = MVE_VADDi32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q2, debug-location !23
     ; CHECK:   DBG_VALUE $noreg, $noreg, !20, !DIExpression(), debug-location !21
     ; CHECK:   DBG_VALUE internal $q2, $noreg, !19, !DIExpression(), debug-location !21
-    ; CHECK:   renamable $q2 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 2, internal killed renamable $vpr, internal killed renamable $q2, debug-location !25
+    ; CHECK:   renamable $q2 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q2, debug-location !25
     ; CHECK:   DBG_VALUE internal $q2, $noreg, !19, !DIExpression(), debug-location !21
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q2, killed $q2, 0, $noreg, undef $q0, debug-location !26
+    ; CHECK: $q0 = MVE_VORR killed $q2, killed $q2, 0, $noreg, $noreg, undef $q0, debug-location !26
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0, debug-location !26
     DBG_VALUE $q0, $noreg, !17, !DIExpression(), debug-location !21
     DBG_VALUE $q0, $noreg, !17, !DIExpression(), debug-location !21
@@ -96,15 +96,15 @@ body:             |
     DBG_VALUE $q1, $noreg, !18, !DIExpression(), debug-location !21
     DBG_VALUE $q2, $noreg, !19, !DIExpression(), debug-location !21
     DBG_VALUE $q2, $noreg, !19, !DIExpression(), debug-location !21
-    renamable $vpr = MVE_VCMPs32 renamable $q1, renamable $q0, 10, 0, $noreg, debug-location !22
+    renamable $vpr = MVE_VCMPs32 renamable $q1, renamable $q0, 10, 0, $noreg, $noreg, debug-location !22
     DBG_VALUE $noreg, $noreg, !20, !DIExpression(), debug-location !21
-    renamable $q2 = MVE_VADDi32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q2, debug-location !23
+    renamable $q2 = MVE_VADDi32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q2, debug-location !23
     DBG_VALUE $noreg, $noreg, !20, !DIExpression(), debug-location !21
     DBG_VALUE $q2, $noreg, !19, !DIExpression(), debug-location !21
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, debug-location !24
-    renamable $q2 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, killed renamable $q2, debug-location !25
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg, debug-location !24
+    renamable $q2 = MVE_VADDi32 killed renamable $q0, killed renamable $q1, 1, killed renamable $vpr, $noreg, killed renamable $q2, debug-location !25
     DBG_VALUE $q2, $noreg, !19, !DIExpression(), debug-location !21
-    $q0 = MVE_VORR killed $q2, killed $q2, 0, $noreg, undef $q0, debug-location !26
+    $q0 = MVE_VORR killed $q2, killed $q2, 0, $noreg, $noreg, undef $q0, debug-location !26
     tBX_RET 14 /* CC::al */, $noreg, implicit $q0, debug-location !26
 
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir
index 94cad7477345d..aa46ea145f4e2 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir
@@ -67,181 +67,181 @@ body:             |
 
     ; CHECK-LABEL: name: vpt_block_else
     ; CHECK: liveins: $q0, $q1, $q2
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit killed $q0, implicit $q2, implicit $q1, implicit killed $q3, implicit $zr {
     ; CHECK:   MVE_VPTv4s32 7, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, internal killed renamable $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR $q3, $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR $q3, $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit killed $q0, implicit $q2, implicit $q1, implicit killed $q3, implicit $zr {
     ; CHECK:   MVE_VPTv4s32 7, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR $q3, $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR $q3, $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3, implicit $zr {
     ; CHECK:   MVE_VPTv4s32 15, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3, implicit $zr {
     ; CHECK:   MVE_VPTv4s32 15, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3 {
     ; CHECK:   MVE_VPTv4s32 15, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3 {
     ; CHECK:   MVE_VPTv4s32 14, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3, implicit $zr {
     ; CHECK:   MVE_VPTv4s32 14, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 2, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit $q1, implicit killed $q3 {
     ; CHECK:   MVE_VPTv4s32 6, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, internal killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, $noreg, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal killed renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit killed $q3 {
     ; CHECK:   MVE_VPTv4s32 11, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, internal killed renamable $q3
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, $noreg, internal killed renamable $q3
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal killed renamable $q3
     ; CHECK: }
-    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
+    ; CHECK: $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+    ; CHECK: $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
     ; CHECK: BUNDLE implicit-def $vpr, implicit-def $q3, implicit-def $d6, implicit-def $s12, implicit-def $s13, implicit-def $d7, implicit-def $s14, implicit-def $s15, implicit $q0, implicit $q2, implicit killed $q3 {
     ; CHECK:   MVE_VPTv4s32 13, renamable $q0, renamable $q2, 10, implicit-def $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 2, internal killed renamable $vpr
-    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, internal killed renamable $q3
-    ; CHECK:   renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, internal killed renamable $vpr
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, internal renamable $vpr, $noreg, killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 2, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 2, internal renamable $vpr, $noreg, internal killed renamable $q3
+    ; CHECK:   renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, internal killed renamable $vpr, $noreg
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, renamable $q3
-    $q0 = MVE_VORR $q3, $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, renamable $q3
-    $q0 = MVE_VORR $q3, $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, killed renamable $q3
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, undef $q0
-
-    $q3 = MVE_VORR $q2, $q2, 0, $noreg, undef $q3
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, killed renamable $vpr
-    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, killed renamable $q3
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, renamable $q3
+    $q0 = MVE_VORR $q3, $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VCMPs32r killed renamable $q0, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, renamable $q3
+    $q0 = MVE_VORR $q3, $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $q3 = MVE_VMAXs32 renamable $q0, renamable $q1, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    $q0 = MVE_VORR killed $q3, killed $q3, 0, $noreg, $noreg, undef $q0
+
+    $q3 = MVE_VORR $q2, $q2, 0, $noreg, $noreg, undef $q3
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 10, 0, $noreg, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, killed renamable $vpr, $noreg
+    renamable $q3 = MVE_VORR renamable $q2, renamable $q2, 1, renamable $vpr, $noreg, killed renamable $q3
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32 renamable $q0, renamable $q2, 11, 1, killed renamable $vpr, $noreg
 
     tBX_RET 14, $noreg, implicit $q0
 

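The remaining test updates follow one mechanical pattern: every MVE instruction in the checked MIR grows a single extra predicate operand, which in these tests is always $noreg. A minimal before/after sketch, adapted from the mve-vpt-block-fold-vcmp.mir hunks below (memory operands elided):

    ; old form: the vector predicate ($vpr) was the final predicate operand
    renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, renamable $vpr
    ; new form: a trailing $noreg fills the added slot
    renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, renamable $vpr, $noreg

Unpredicated instructions pick up the same extra operand next to their existing $noreg, giving the "$noreg, $noreg" pairs seen throughout the diffs.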
diff  --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir
index 57be46bb2755c..3fa96947ea948 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir
@@ -109,15 +109,15 @@ body:             |
     ; CHECK: renamable $r3 = t2LDRi12 $r7, 8, 14 /* CC::al */, $noreg :: (load (s32) from %fixed-stack.0)
     ; CHECK: BUNDLE implicit-def $vpr, implicit-def dead $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit $q0, implicit $zr, implicit killed $r0, implicit killed $r3, implicit killed $r1, implicit killed $lr {
     ; CHECK:   MVE_VPTv4f32r 1, renamable $q0, $zr, 10, implicit-def $vpr
-    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, internal renamable $vpr :: (load (s128) from %ir.src, align 4)
-    ; CHECK:   MVE_VSTRWU32 internal killed renamable $q0, killed renamable $r3, 0, 1, internal renamable $vpr :: (store (s128) into %ir.dest, align 4)
-    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 1, internal renamable $vpr :: (load (s128) from %ir.src2, align 4)
-    ; CHECK:   MVE_VSTRWU32 internal killed renamable $q0, killed renamable $lr, 0, 1, internal renamable $vpr :: (store (s128) into %ir.dest2, align 4)
+    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, internal renamable $vpr, $noreg :: (load (s128) from %ir.src, align 4)
+    ; CHECK:   MVE_VSTRWU32 internal killed renamable $q0, killed renamable $r3, 0, 1, internal renamable $vpr, $noreg :: (store (s128) into %ir.dest, align 4)
+    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 1, internal renamable $vpr, $noreg :: (load (s128) from %ir.src2, align 4)
+    ; CHECK:   MVE_VSTRWU32 internal killed renamable $q0, killed renamable $lr, 0, 1, internal renamable $vpr, $noreg :: (store (s128) into %ir.dest2, align 4)
     ; CHECK: }
     ; CHECK: BUNDLE implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $r2, implicit killed $r12 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 1, renamable $vpr :: (load (s128) from %ir.src3, align 4)
-    ; CHECK:   MVE_VSTRWU32 internal renamable $q0, killed renamable $r12, 0, 1, killed renamable $vpr :: (store (s128) into %ir.dest3, align 4)
+    ; CHECK:   renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.src3, align 4)
+    ; CHECK:   MVE_VSTRWU32 internal renamable $q0, killed renamable $r12, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.dest3, align 4)
     ; CHECK: }
     ; CHECK: $sp = t2LDMIA_RET $sp, 14 /* CC::al */, $noreg, def $r7, def $pc, implicit $q0
     $sp = frame-setup t2STMDB_UPD $sp, 14, $noreg, killed $r7, killed $lr
@@ -129,13 +129,13 @@ body:             |
     renamable $r12 = t2LDRi12 $r7, 16, 14, $noreg :: (load (s32) from %fixed-stack.1)
     renamable $lr = t2LDRi12 $r7, 12, 14, $noreg :: (load (s32) from %fixed-stack.2)
     renamable $r3 = t2LDRi12 $r7, 8, 14, $noreg :: (load (s32) from %fixed-stack.3)
-    renamable $vpr = MVE_VCMPf32r renamable $q0, $zr, 10, 0, $noreg
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, renamable $vpr :: (load (s128) from %ir.src, align 4)
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $r3, 0, 1, renamable $vpr :: (store (s128) into %ir.dest, align 4)
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 1, renamable $vpr :: (load (s128) from %ir.src2, align 4)
-    MVE_VSTRWU32 killed renamable $q0, killed renamable $lr, 0, 1, renamable $vpr :: (store (s128) into %ir.dest2, align 4)
-    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 1, renamable $vpr :: (load (s128) from %ir.src3, align 4)
-    MVE_VSTRWU32 renamable $q0, killed renamable $r12, 0, 1, killed renamable $vpr :: (store (s128) into %ir.dest3, align 4)
+    renamable $vpr = MVE_VCMPf32r renamable $q0, $zr, 10, 0, $noreg, $noreg
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r0, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.src, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $r3, 0, 1, renamable $vpr, $noreg :: (store (s128) into %ir.dest, align 4)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r1, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.src2, align 4)
+    MVE_VSTRWU32 killed renamable $q0, killed renamable $lr, 0, 1, renamable $vpr, $noreg :: (store (s128) into %ir.dest2, align 4)
+    renamable $q0 = MVE_VLDRWU32 killed renamable $r2, 0, 1, renamable $vpr, $noreg :: (load (s128) from %ir.src3, align 4)
+    MVE_VSTRWU32 renamable $q0, killed renamable $r12, 0, 1, killed renamable $vpr, $noreg :: (store (s128) into %ir.dest3, align 4)
     $sp = t2LDMIA_RET $sp, 14, $noreg, def $r7, def $pc, implicit $q0
 
 ...

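Beyond the operand change, the CHECK bundles in the file above also show the compare being folded away when the VPT block is formed: the standalone MVE_VCMPf32r is absorbed into the MVE_VPTv4f32r that opens the bundle and defines $vpr for the predicated loads and stores inside it. Schematically, from the hunks above:

    ; before block formation
    renamable $vpr = MVE_VCMPf32r renamable $q0, $zr, 10, 0, $noreg, $noreg
    ; after: the compare becomes the VPT that opens the bundle
    MVE_VPTv4f32r 1, renamable $q0, $zr, 10, implicit-def $vpr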
diff  --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-kill.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-kill.mir
index b222b9f7bbd99..f37c3fc9920b2 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-kill.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-kill.mir
@@ -19,20 +19,20 @@ body:             |
     ; CHECK-LABEL: name: b
     ; CHECK: liveins: $r0, $r1, $r4, $r6, $r7, $lr
     ; CHECK: $sp = frame-setup t2STMDB_UPD $sp, 14 /* CC::al */, $noreg, killed $r4, killed $r6, killed $r7, killed $lr
-    ; CHECK: renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
-    ; CHECK: renamable $q1 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q1
-    ; CHECK: renamable $q2 = MVE_VADD_qr_i32 renamable $q1, renamable $r1, 0, $noreg, undef renamable $q2
+    ; CHECK: renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
+    ; CHECK: renamable $q1 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q1
+    ; CHECK: renamable $q2 = MVE_VADD_qr_i32 renamable $q1, renamable $r1, 0, $noreg, $noreg, undef renamable $q2
     ; CHECK: BUNDLE implicit-def dead $vpr, implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $q0, implicit $q1, implicit killed $q2 {
     ; CHECK:   MVE_VPTv4u32 8, renamable $q0, renamable $q1, 8, implicit-def $vpr
-    ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q2, 1, internal killed renamable $vpr, renamable $q0
+    ; CHECK:   renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q2, 1, internal killed renamable $vpr, $noreg, renamable $q0
     ; CHECK: }
     ; CHECK: $sp = frame-destroy t2LDMIA_RET $sp, 14 /* CC::al */, $noreg, def $r4, def $r6, def $r7, def $pc, implicit undef $r0
     $sp = frame-setup t2STMDB_UPD $sp, 14 /* CC::al */, $noreg, killed $r4, killed $r6, killed $r7, killed $lr
-    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, undef renamable $q0
-    renamable $q1 = MVE_VMOVimmi32 1, 0, $noreg, undef renamable $q1
-    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q1, 8, 0, $noreg
-    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q1, renamable $r1, 0, $noreg, undef renamable $q2
-    renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q2, 1, killed renamable $vpr, renamable $q0
+    renamable $q0 = MVE_VMOVimmi32 0, 0, $noreg, $noreg, undef renamable $q0
+    renamable $q1 = MVE_VMOVimmi32 1, 0, $noreg, $noreg, undef renamable $q1
+    renamable $vpr = MVE_VCMPu32 renamable $q0, renamable $q1, 8, 0, $noreg, $noreg
+    renamable $q2 = MVE_VADD_qr_i32 killed renamable $q1, renamable $r1, 0, $noreg, $noreg, undef renamable $q2
+    renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q2, 1, killed renamable $vpr, $noreg, renamable $q0
     $sp = frame-destroy t2LDMIA_RET $sp, 14 /* CC::al */, $noreg, def $r4, def $r6, def $r7, def $pc, implicit undef $r0
 
 ...

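One piece of bundle syntax worth noting in these tests: an operand marked internal, as in the hunks above, reads a value defined earlier inside the same bundle rather than a value live into it. That is why the outer BUNDLE line can list $vpr as implicit-def dead while an instruction inside the bundle still consumes it:

    MVE_VPTv4u32 8, renamable $q0, renamable $q1, 8, implicit-def $vpr
    renamable $q0 = MVE_VORR killed renamable $q0, killed renamable $q2, 1, internal killed renamable $vpr, $noreg, renamable $q0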
diff  --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir
index 91643e5ccfabe..df56c9506698b 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir
@@ -68,11 +68,11 @@ body:             |
     ; CHECK: $vpr = VMSR_P0 killed $r0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def $q0, implicit-def $d0, implicit-def $s0, implicit-def $s1, implicit-def $d1, implicit-def $s2, implicit-def $s3, implicit killed $vpr, implicit killed $q1, implicit killed $q2, implicit killed $q0 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    ; CHECK:   renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     ; CHECK: }
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-      $vpr = VMSR_P0 killed $r0, 14, $noreg
-    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, killed renamable $q0
+    $vpr = VMSR_P0 killed $r0, 14, $noreg
+    renamable $q0 = nnan ninf nsz MVE_VMINNMf32 killed renamable $q1, killed renamable $q2, 1, killed renamable $vpr, $noreg, killed renamable $q0
     tBX_RET 14, $noreg, implicit $q0
 
 ...

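In the mve-vpt-nots.mir tests below, the first operand of MVE_VPST/MVE_VPT encodes the shape of the predicated block, and judging from these bundles the shape can be read off the hunks: a mask of 8 covers a single "then" instruction, 4 covers two "then" instructions (vpred code 1), and 12 covers a then/else pair, with the "else" instruction carrying vpred code 2. For example:

    MVE_VPST 12, implicit $vpr
    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg              ; then
    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 2, internal killed renamable $vpr, $noreg ; else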
diff  --git a/llvm/test/CodeGen/Thumb2/mve-vpt-nots.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-nots.mir
index 8bc7a0b535989..73c8b2add4fe6 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-nots.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-nots.mir
@@ -63,16 +63,16 @@ body:             |
     ; CHECK: liveins: $q0, $q1, $q2
     ; CHECK: BUNDLE implicit-def $vpr, implicit $q0, implicit $zr, implicit $q1, implicit killed $q2 {
     ; CHECK:   MVE_VPTv4s32r 12, renamable $q0, $zr, 11, implicit-def $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 2, internal killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr, $noreg
+    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 2, internal killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...
@@ -91,29 +91,29 @@ body:             |
   ; CHECK:   liveins: $q0, $q1, $q2
   ; CHECK:   BUNDLE implicit-def $vpr, implicit $q0, implicit $zr, implicit $q1 {
   ; CHECK:     MVE_VPTv4s32r 8, renamable $q0, $zr, 11, implicit-def $vpr
-  ; CHECK:     renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr
+  ; CHECK:     renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr, $noreg
   ; CHECK:   }
-  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+  ; CHECK:   renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
   ; CHECK: bb.1.bb2:
   ; CHECK:   liveins: $q0, $q1, $q2, $vpr
   ; CHECK:   BUNDLE implicit-def $vpr, implicit killed $vpr, implicit killed $q2, implicit $zr {
   ; CHECK:     MVE_VPST 8, implicit $vpr
-  ; CHECK:     renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
+  ; CHECK:     renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
   ; CHECK:   }
-  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+  ; CHECK:   renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit $q0
   bb.0.entry:
     liveins: $q0, $q1, $q2
 
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
 
   bb.1.bb2:
     liveins: $q0, $q1, $q2, $vpr
 
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...
@@ -133,22 +133,22 @@ body:             |
     ; CHECK: liveins: $q0, $q1, $q2
     ; CHECK: BUNDLE implicit-def $vpr, implicit $q0, implicit $zr, implicit $q1 {
     ; CHECK:   MVE_VPTv4s32r 8, renamable $q0, $zr, 11, implicit-def $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $vpr, implicit killed $vpr, implicit killed $q2, implicit $zr {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...
@@ -168,28 +168,28 @@ body:             |
     ; CHECK: liveins: $q0, $q1, $q2
     ; CHECK: BUNDLE implicit-def $vpr, implicit $q0, implicit $zr, implicit $q1 {
     ; CHECK:   MVE_VPTv4s32r 8, renamable $q0, $zr, 11, implicit-def $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, internal killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $vpr, implicit killed $vpr, implicit killed $q2, implicit $zr {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...
@@ -207,20 +207,20 @@ body:             |
 
     ; CHECK-LABEL: name: vpnot_first
     ; CHECK: liveins: $q0, $q1, $q2
-    ; CHECK: renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    ; CHECK: renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $vpr, implicit killed $vpr, implicit $q1, implicit $zr, implicit killed $q2 {
     ; CHECK:   MVE_VPST 4, implicit $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, internal killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, internal killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...
@@ -238,23 +238,23 @@ body:             |
 
     ; CHECK-LABEL: name: vpnot_many
     ; CHECK: liveins: $q0, $q1, $q2
-    ; CHECK: renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
+    ; CHECK: renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
     ; CHECK: BUNDLE implicit-def $vpr, implicit killed $vpr, implicit $q1, implicit $zr, implicit killed $q2 {
     ; CHECK:   MVE_VPST 12, implicit $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 2, internal killed renamable $vpr
+    ; CHECK:   renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    ; CHECK:   renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 2, internal killed renamable $vpr, $noreg
     ; CHECK: }
-    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    ; CHECK: renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    ; CHECK: renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit $q0
-    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr
-    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg
-    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr
+    renamable $vpr = MVE_VCMPs32r renamable $q0, $zr, 11, 0, $noreg, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPs32r renamable $q1, $zr, 12, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $vpr = MVE_VCMPi32r killed renamable $q2, $zr, 0, 1, killed renamable $vpr, $noreg
+    renamable $vpr = MVE_VPNOT killed renamable $vpr, 0, $noreg, $noreg
+    renamable $q0 = MVE_VPSEL killed renamable $q0, killed renamable $q1, 0, killed renamable $vpr, $noreg
     tBX_RET 14, $noreg, implicit $q0
 
 ...

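The mve-vpt-optimisations.mir tests that follow exercise VPNOT generation at the SSA level: when two VCMPs compare the same operands under opposite condition codes (or the swapped-operand equivalent), the second compare is replaced by a VPNOT of the first result. Sketched from the first hunk below:

    ; input: same operands, opposite conditions (10 vs 11)
    %3:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
    %4:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
    ; output: the second compare becomes a VPNOT of the first result
    %4:vccr = MVE_VPNOT %3:vccr, 0, $noreg, $noreg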
diff  --git a/llvm/test/CodeGen/Thumb2/mve-vpt-optimisations.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-optimisations.mir
index db652d9de7940..4c6959db9009f 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-optimisations.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-optimisations.mir
@@ -91,192 +91,192 @@ body:             |
   ; CHECK-LABEL: name: vcmp_with_opposite_cond
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16_]], 0, $noreg, $noreg
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf32_]], 0, $noreg, $noreg
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16_]], 0, $noreg, $noreg
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32_]], 0, $noreg, $noreg
   ; CHECK: bb.4:
   ; CHECK:   successors: %bb.5(0x80000000)
-  ; CHECK:   [[MVE_VCMPi8_:%[0-9]+]]:vccr = MVE_VCMPi8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi8_:%[0-9]+]]:vccr = MVE_VCMPi8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8_]], 0, $noreg, $noreg
   ; CHECK: bb.5:
   ; CHECK:   successors: %bb.6(0x80000000)
-  ; CHECK:   [[MVE_VCMPs16_:%[0-9]+]]:vccr = MVE_VCMPs16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs16_:%[0-9]+]]:vccr = MVE_VCMPs16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16_]], 0, $noreg, $noreg
   ; CHECK: bb.6:
   ; CHECK:   successors: %bb.7(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
   ; CHECK: bb.7:
   ; CHECK:   successors: %bb.8(0x80000000)
-  ; CHECK:   [[MVE_VCMPs8_:%[0-9]+]]:vccr = MVE_VCMPs8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs8_:%[0-9]+]]:vccr = MVE_VCMPs8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8_]], 0, $noreg, $noreg
   ; CHECK: bb.8:
   ; CHECK:   successors: %bb.9(0x80000000)
-  ; CHECK:   [[MVE_VCMPu16_:%[0-9]+]]:vccr = MVE_VCMPu16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu16_:%[0-9]+]]:vccr = MVE_VCMPu16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16_]], 0, $noreg, $noreg
   ; CHECK: bb.9:
   ; CHECK:   successors: %bb.10(0x80000000)
-  ; CHECK:   [[MVE_VCMPu32_:%[0-9]+]]:vccr = MVE_VCMPu32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu32_:%[0-9]+]]:vccr = MVE_VCMPu32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32_]], 0, $noreg, $noreg
   ; CHECK: bb.10:
   ; CHECK:   successors: %bb.11(0x80000000)
-  ; CHECK:   [[MVE_VCMPu8_:%[0-9]+]]:vccr = MVE_VCMPu8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu8_:%[0-9]+]]:vccr = MVE_VCMPu8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8_]], 0, $noreg, $noreg
   ; CHECK: bb.11:
   ; CHECK:   successors: %bb.12(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16r:%[0-9]+]]:vccr = MVE_VCMPf16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT11:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPf16r:%[0-9]+]]:vccr = MVE_VCMPf16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT11:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16r]], 0, $noreg, $noreg
   ; CHECK: bb.12:
   ; CHECK:   successors: %bb.13(0x80000000)
-  ; CHECK:   [[MVE_VCMPf32r:%[0-9]+]]:vccr = MVE_VCMPf32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT12:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf32r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPf32r:%[0-9]+]]:vccr = MVE_VCMPf32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT12:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf32r]], 0, $noreg, $noreg
   ; CHECK: bb.13:
   ; CHECK:   successors: %bb.14(0x80000000)
-  ; CHECK:   [[MVE_VCMPi16r:%[0-9]+]]:vccr = MVE_VCMPi16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT13:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi16r:%[0-9]+]]:vccr = MVE_VCMPi16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT13:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16r]], 0, $noreg, $noreg
   ; CHECK: bb.14:
   ; CHECK:   successors: %bb.15(0x80000000)
-  ; CHECK:   [[MVE_VCMPi32r:%[0-9]+]]:vccr = MVE_VCMPi32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT14:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi32r:%[0-9]+]]:vccr = MVE_VCMPi32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT14:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32r]], 0, $noreg, $noreg
   ; CHECK: bb.15:
   ; CHECK:   successors: %bb.16(0x80000000)
-  ; CHECK:   [[MVE_VCMPi8r:%[0-9]+]]:vccr = MVE_VCMPi8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT15:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi8r:%[0-9]+]]:vccr = MVE_VCMPi8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT15:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8r]], 0, $noreg, $noreg
   ; CHECK: bb.16:
   ; CHECK:   successors: %bb.17(0x80000000)
-  ; CHECK:   [[MVE_VCMPs16r:%[0-9]+]]:vccr = MVE_VCMPs16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT16:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs16r:%[0-9]+]]:vccr = MVE_VCMPs16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT16:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16r]], 0, $noreg, $noreg
   ; CHECK: bb.17:
   ; CHECK:   successors: %bb.18(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32r:%[0-9]+]]:vccr = MVE_VCMPs32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT17:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs32r:%[0-9]+]]:vccr = MVE_VCMPs32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT17:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32r]], 0, $noreg, $noreg
   ; CHECK: bb.18:
   ; CHECK:   successors: %bb.19(0x80000000)
-  ; CHECK:   [[MVE_VCMPs8r:%[0-9]+]]:vccr = MVE_VCMPs8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT18:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs8r:%[0-9]+]]:vccr = MVE_VCMPs8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT18:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8r]], 0, $noreg, $noreg
   ; CHECK: bb.19:
   ; CHECK:   successors: %bb.20(0x80000000)
-  ; CHECK:   [[MVE_VCMPu16r:%[0-9]+]]:vccr = MVE_VCMPu16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT19:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu16r:%[0-9]+]]:vccr = MVE_VCMPu16r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT19:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16r]], 0, $noreg, $noreg
   ; CHECK: bb.20:
   ; CHECK:   successors: %bb.21(0x80000000)
-  ; CHECK:   [[MVE_VCMPu32r:%[0-9]+]]:vccr = MVE_VCMPu32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT20:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu32r:%[0-9]+]]:vccr = MVE_VCMPu32r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT20:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32r]], 0, $noreg, $noreg
   ; CHECK: bb.21:
   ; CHECK:   successors: %bb.22(0x80000000)
-  ; CHECK:   [[MVE_VCMPu8r:%[0-9]+]]:vccr = MVE_VCMPu8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT21:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8r]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu8r:%[0-9]+]]:vccr = MVE_VCMPu8r %1:mqpr, %25:gprwithzr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT21:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8r]], 0, $noreg, $noreg
   ; CHECK: bb.22:
-  ; CHECK:   [[MVE_VCMPu8r1:%[0-9]+]]:vccr = MVE_VCMPu8r %1:mqpr, $zr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT22:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8r1]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu8r1:%[0-9]+]]:vccr = MVE_VCMPu8r %1:mqpr, $zr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT22:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8r1]], 0, $noreg, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that VCMPs with an opposite condition are correctly converted into VPNOTs.
   ;
   bb.0:
-    %3:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %4:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %3:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %4:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.1:
-    %5:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %6:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %5:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %6:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.2:
-    %7:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %8:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %7:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %8:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.3:
-    %9:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %10:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %9:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %10:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.4:
-    %11:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %12:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %11:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %12:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.5:
-    %13:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %14:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %13:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %14:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.6:
-    %15:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %16:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %15:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %16:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.7:
-    %17:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %18:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %17:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %18:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.8:
-    %19:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %20:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %19:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %20:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.9:
-    %21:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %22:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %21:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %22:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.10:
-    %23:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %24:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %23:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %24:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
 
   bb.11:
-    %25:vccr = MVE_VCMPf16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %26:vccr = MVE_VCMPf16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %25:vccr = MVE_VCMPf16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %26:vccr = MVE_VCMPf16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.12:
-    %27:vccr = MVE_VCMPf32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %28:vccr = MVE_VCMPf32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %27:vccr = MVE_VCMPf32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %28:vccr = MVE_VCMPf32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.13:
-    %29:vccr = MVE_VCMPi16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %30:vccr = MVE_VCMPi16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %29:vccr = MVE_VCMPi16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %30:vccr = MVE_VCMPi16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.14:
-    %31:vccr = MVE_VCMPi32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %32:vccr = MVE_VCMPi32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %31:vccr = MVE_VCMPi32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %32:vccr = MVE_VCMPi32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.15:
-    %33:vccr = MVE_VCMPi8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %34:vccr = MVE_VCMPi8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %33:vccr = MVE_VCMPi8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %34:vccr = MVE_VCMPi8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.16:
-    %35:vccr = MVE_VCMPs16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %36:vccr = MVE_VCMPs16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %35:vccr = MVE_VCMPs16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %36:vccr = MVE_VCMPs16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.17:
-    %37:vccr = MVE_VCMPs32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %38:vccr = MVE_VCMPs32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %37:vccr = MVE_VCMPs32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %38:vccr = MVE_VCMPs32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.18:
-    %39:vccr = MVE_VCMPs8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %40:vccr = MVE_VCMPs8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %39:vccr = MVE_VCMPs8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %40:vccr = MVE_VCMPs8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.19:
-    %41:vccr = MVE_VCMPu16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %42:vccr = MVE_VCMPu16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %41:vccr = MVE_VCMPu16r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %42:vccr = MVE_VCMPu16r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.20:
-    %43:vccr = MVE_VCMPu32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %44:vccr = MVE_VCMPu32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %43:vccr = MVE_VCMPu32r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %44:vccr = MVE_VCMPu32r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.21:
-    %45:vccr = MVE_VCMPu8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg
-    %46:vccr = MVE_VCMPu8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg
+    %45:vccr = MVE_VCMPu8r %0:mqpr, %2:gprwithzr, 10, 0, $noreg, $noreg
+    %46:vccr = MVE_VCMPu8r %0:mqpr, %2:gprwithzr, 11, 0, $noreg, $noreg
 
   bb.22:
     ; There shouldn't be any exception for $zr, so the second VCMP should
     ; be transformed into a VPNOT.
-    %47:vccr = MVE_VCMPu8r %0:mqpr, $zr, 10, 0, $noreg
-    %48:vccr = MVE_VCMPu8r %0:mqpr, $zr, 11, 0, $noreg
+    %47:vccr = MVE_VCMPu8r %0:mqpr, $zr, 10, 0, $noreg, $noreg
+    %48:vccr = MVE_VCMPu8r %0:mqpr, $zr, 11, 0, $noreg, $noreg
 
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
@@ -287,79 +287,79 @@ body:             |
   ; CHECK-LABEL: name: vcmp_with_opposite_cond_and_swapped_operands
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi16_]], 0, $noreg, $noreg
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi32_]], 0, $noreg, $noreg
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPi8_:%[0-9]+]]:vccr = MVE_VCMPi8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPi8_:%[0-9]+]]:vccr = MVE_VCMPi8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPi8_]], 0, $noreg, $noreg
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   [[MVE_VCMPs16_:%[0-9]+]]:vccr = MVE_VCMPs16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs16_:%[0-9]+]]:vccr = MVE_VCMPs16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs16_]], 0, $noreg, $noreg
   ; CHECK: bb.4:
   ; CHECK:   successors: %bb.5(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
   ; CHECK: bb.5:
   ; CHECK:   successors: %bb.6(0x80000000)
-  ; CHECK:   [[MVE_VCMPs8_:%[0-9]+]]:vccr = MVE_VCMPs8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPs8_:%[0-9]+]]:vccr = MVE_VCMPs8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs8_]], 0, $noreg, $noreg
   ; CHECK: bb.6:
   ; CHECK:   successors: %bb.7(0x80000000)
-  ; CHECK:   [[MVE_VCMPu16_:%[0-9]+]]:vccr = MVE_VCMPu16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu16_:%[0-9]+]]:vccr = MVE_VCMPu16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu16_]], 0, $noreg, $noreg
   ; CHECK: bb.7:
   ; CHECK:   successors: %bb.8(0x80000000)
-  ; CHECK:   [[MVE_VCMPu32_:%[0-9]+]]:vccr = MVE_VCMPu32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu32_:%[0-9]+]]:vccr = MVE_VCMPu32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu32_]], 0, $noreg, $noreg
   ; CHECK: bb.8:
-  ; CHECK:   [[MVE_VCMPu8_:%[0-9]+]]:vccr = MVE_VCMPu8 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPu8_:%[0-9]+]]:vccr = MVE_VCMPu8 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPu8_]], 0, $noreg, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that VCMPs with an opposite condition and swapped operands are
   ; correctly converted into VPNOTs.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VCMPi16 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %2:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPi16 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.1:
-    %4:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %5:vccr = MVE_VCMPi32 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %4:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %5:vccr = MVE_VCMPi32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.2:
-    %6:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %7:vccr = MVE_VCMPi8 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %6:vccr = MVE_VCMPi8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %7:vccr = MVE_VCMPi8 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.3:
-    %8:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %9:vccr = MVE_VCMPs16 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %8:vccr = MVE_VCMPs16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %9:vccr = MVE_VCMPs16 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.4:
-    %10:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %11:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %10:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %11:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.5:
-    %12:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %13:vccr = MVE_VCMPs8 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %12:vccr = MVE_VCMPs8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %13:vccr = MVE_VCMPs8 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.6:
-    %14:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %15:vccr = MVE_VCMPu16 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %14:vccr = MVE_VCMPu16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %15:vccr = MVE_VCMPu16 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.7:
-    %16:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %17:vccr = MVE_VCMPu32 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %16:vccr = MVE_VCMPu32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %17:vccr = MVE_VCMPu32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.8:
-    %18:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %19:vccr = MVE_VCMPu8 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %18:vccr = MVE_VCMPu8 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %19:vccr = MVE_VCMPu8 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
@@ -373,13 +373,13 @@ body:             |
   ;
   bb.0:
     ; CHECK-LABEL: name: triple_vcmp
-    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 12, 0, $noreg
+    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 12, 0, $noreg, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
-    %4:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
+    %4:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -389,30 +389,30 @@ body:             |
   ; CHECK-LABEL: name: killed_vccr_values
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPf16_]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16_]], 0, $noreg
+  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPf16_]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPf16_]], 0, $noreg, $noreg
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT1]], undef [[MVE_VORR1]]
-  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], undef [[MVE_VORR2]]
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR2]]
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT3]], undef [[MVE_VORR3]]
-  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT3]], 0, $noreg
-  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR3]], 1, [[MVE_VPNOT4]], undef [[MVE_VORR4]]
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT3]], $noreg, undef [[MVE_VORR3]]
+  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT3]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR3]], 1, [[MVE_VPNOT4]], $noreg, undef [[MVE_VORR4]]
   ; CHECK: bb.3:
-  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg
-  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT5]], undef [[MVE_VORR5]]
-  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg
-  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT6]], undef [[MVE_VORR6]]
-  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], undef [[MVE_VORR7]]
+  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT5]], $noreg, undef [[MVE_VORR5]]
+  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT6]], $noreg, undef [[MVE_VORR6]]
+  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], $noreg, undef [[MVE_VORR7]]
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   bb.0:
     ;
@@ -420,38 +420,38 @@ body:             |
     ; second VCMP (that will be converted into a VPNOT) is found,
     ; the kill flag is removed.
     ;
-    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1,  killed %2:vccr, undef %3:mqpr
-    %4:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 0, $noreg
+    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1,  killed %2:vccr, $noreg, undef %3:mqpr
+    %4:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
   bb.1:
     ;
     ; Tests that, if the result of the VCMP that has been replaced with a
     ; VPNOT is killed (before the insertion of the second VPNOT),
     ; the kill flag is removed.
     ;
-    %5:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %6:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
-    %7:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, killed %6:vccr, undef %7:mqpr
-    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %5:vccr, undef %8:mqpr
+    %5:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %6:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
+    %7:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, killed %6:vccr, $noreg, undef %7:mqpr
+    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %5:vccr, $noreg, undef %8:mqpr
   bb.2:
     ;
     ; Tests that the kill flag is removed when inserting a VPNOT for
     ; an instruction.
     ;
-    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %10:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
-    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, undef %11:mqpr
-    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, killed %9:vccr, undef %12:mqpr
+    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %10:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
+    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, $noreg, undef %11:mqpr
+    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, killed %9:vccr, $noreg, undef %12:mqpr
   bb.3:
     ;
     ; Tests that the kill flag is correctly removed when replacing a use
-    ; of the opposite VCCR value with the last VPNOT's result
+    ; of the opposite vccr, $noreg value with the last VPNOT's result
     ;
-    %13:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %14:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg
-    %15:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1,  %14:vccr, undef %15:mqpr
-    %16:mqpr = MVE_VORR %15:mqpr, %15:mqpr, 1, %13:vccr, undef %16:mqpr
-    %17:mqpr = MVE_VORR %16:mqpr, %16:mqpr, 1, killed %13:vccr, undef %17:mqpr
+    %13:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %14:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
+    %15:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1,  %14:vccr, $noreg, undef %15:mqpr
+    %16:mqpr = MVE_VORR %15:mqpr, %15:mqpr, 1, %13:vccr, $noreg, undef %16:mqpr
+    %17:mqpr = MVE_VORR %16:mqpr, %16:mqpr, 1, killed %13:vccr, $noreg, undef %17:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -461,66 +461,66 @@ body:             |
   ; CHECK-LABEL: name: predicated_vcmps
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPi16_1:%[0-9]+]]:vccr = MVE_VCMPi16 %2:mqpr, %1:mqpr, 12, 1, [[MVE_VCMPi16_]]
-  ; CHECK:   [[MVE_VCMPi16_2:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi16_]]
+  ; CHECK:   [[MVE_VCMPi16_:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPi16_1:%[0-9]+]]:vccr = MVE_VCMPi16 %2:mqpr, %1:mqpr, 12, 1, [[MVE_VCMPi16_]], $noreg
+  ; CHECK:   [[MVE_VCMPi16_2:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi16_]], $noreg
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPi32_1:%[0-9]+]]:vccr = MVE_VCMPi32 %2:mqpr, %1:mqpr, 12, 1, [[MVE_VCMPi32_]]
-  ; CHECK:   [[MVE_VCMPi32_2:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi32_]]
+  ; CHECK:   [[MVE_VCMPi32_:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPi32_1:%[0-9]+]]:vccr = MVE_VCMPi32 %2:mqpr, %1:mqpr, 12, 1, [[MVE_VCMPi32_]], $noreg
+  ; CHECK:   [[MVE_VCMPi32_2:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi32_]], $noreg
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf16_1:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPf16_]]
-  ; CHECK:   [[MVE_VCMPf16_2:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPf16_]]
+  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf16_1:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPf16_]], $noreg
+  ; CHECK:   [[MVE_VCMPf16_2:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPf16_]], $noreg
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf32_1:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPf32_]]
-  ; CHECK:   [[MVE_VCMPf32_2:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPf32_]]
+  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf32_1:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPf32_]], $noreg
+  ; CHECK:   [[MVE_VCMPf32_2:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPf32_]], $noreg
   ; CHECK: bb.4:
   ; CHECK:   successors: %bb.5(0x80000000)
-  ; CHECK:   [[MVE_VCMPi16_3:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPi16_4:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPi16_3]]
-  ; CHECK:   [[MVE_VCMPi16_5:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi16_3]]
+  ; CHECK:   [[MVE_VCMPi16_3:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPi16_4:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPi16_3]], $noreg
+  ; CHECK:   [[MVE_VCMPi16_5:%[0-9]+]]:vccr = MVE_VCMPi16 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi16_3]], $noreg
   ; CHECK: bb.5:
-  ; CHECK:   [[MVE_VCMPi32_3:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPi32_4:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPi32_3]]
-  ; CHECK:   [[MVE_VCMPi32_5:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi32_3]]
+  ; CHECK:   [[MVE_VCMPi32_3:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPi32_4:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 11, 1, [[MVE_VCMPi32_3]], $noreg
+  ; CHECK:   [[MVE_VCMPi32_5:%[0-9]+]]:vccr = MVE_VCMPi32 %1:mqpr, %2:mqpr, 10, 1, [[MVE_VCMPi32_3]], $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that predicated VCMPs are not replaced.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VCMPi16 %1:mqpr, %0:mqpr, 12, 1, %2:vccr
-    %4:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 1, %2:vccr
+    %2:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPi16 %1:mqpr, %0:mqpr, 12, 1, %2:vccr, $noreg
+    %4:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 1, %2:vccr, $noreg
 
   bb.1:
-    %5:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %6:vccr = MVE_VCMPi32 %1:mqpr, %0:mqpr, 12, 1, %5:vccr
-    %7:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 1, %5:vccr
+    %5:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %6:vccr = MVE_VCMPi32 %1:mqpr, %0:mqpr, 12, 1, %5:vccr, $noreg
+    %7:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 1, %5:vccr, $noreg
 
   bb.2:
-    %8:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %9:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 1, %8:vccr
-    %10:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 1, %8:vccr
+    %8:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %9:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 11, 1, %8:vccr, $noreg
+    %10:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 1, %8:vccr, $noreg
 
   bb.3:
-    %11:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %12:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 11, 1, %11:vccr
-    %13:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 1, %11:vccr
+    %11:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %12:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 11, 1, %11:vccr, $noreg
+    %13:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 1, %11:vccr, $noreg
 
   bb.4:
-    %14:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %15:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 11, 1, %14:vccr
-    %16:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 1, %14:vccr
+    %14:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %15:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 11, 1, %14:vccr, $noreg
+    %16:vccr = MVE_VCMPi16 %0:mqpr, %1:mqpr, 10, 1, %14:vccr, $noreg
 
   bb.5:
-    %17:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %18:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 11, 1, %17:vccr
-    %19:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 1, %17:vccr
+    %17:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %18:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 11, 1, %17:vccr, $noreg
+    %19:vccr = MVE_VCMPi32 %0:mqpr, %1:mqpr, 10, 1, %17:vccr, $noreg
 
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
@@ -531,39 +531,39 @@ body:             |
   ; CHECK-LABEL: name: flt_with_swapped_operands
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf16_1:%[0-9]+]]:vccr = MVE_VCMPf16 %2:mqpr, %1:mqpr, 12, 0, $noreg
+  ; CHECK:   [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf16_1:%[0-9]+]]:vccr = MVE_VCMPf16 %2:mqpr, %1:mqpr, 12, 0, $noreg, $noreg
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf32_1:%[0-9]+]]:vccr = MVE_VCMPf32 %2:mqpr, %1:mqpr, 12, 0, $noreg
+  ; CHECK:   [[MVE_VCMPf32_:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf32_1:%[0-9]+]]:vccr = MVE_VCMPf32 %2:mqpr, %1:mqpr, 12, 0, $noreg, $noreg
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPf16_2:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf16_3:%[0-9]+]]:vccr = MVE_VCMPf16 %2:mqpr, %1:mqpr, 11, 0, $noreg
+  ; CHECK:   [[MVE_VCMPf16_2:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf16_3:%[0-9]+]]:vccr = MVE_VCMPf16 %2:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
   ; CHECK: bb.3:
-  ; CHECK:   [[MVE_VCMPf32_2:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPf32_3:%[0-9]+]]:vccr = MVE_VCMPf32 %2:mqpr, %1:mqpr, 11, 0, $noreg
+  ; CHECK:   [[MVE_VCMPf32_2:%[0-9]+]]:vccr = MVE_VCMPf32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPf32_3:%[0-9]+]]:vccr = MVE_VCMPf32 %2:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that float VCMPs with an opposite condition and swapped operands
   ; are not transformed into VPNOTs.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VCMPf16 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPf16 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.1:
-    %4:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %5:vccr = MVE_VCMPf32 %1:mqpr, %0:mqpr, 12, 0, $noreg
+    %4:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %5:vccr = MVE_VCMPf32 %1:mqpr, %0:mqpr, 12, 0, $noreg, $noreg
 
   bb.2:
-    %6:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %7:vccr = MVE_VCMPf16 %1:mqpr, %0:mqpr, 11, 0, $noreg
+    %6:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %7:vccr = MVE_VCMPf16 %1:mqpr, %0:mqpr, 11, 0, $noreg, $noreg
 
   bb.3:
-    %8:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %9:vccr = MVE_VCMPf32 %1:mqpr, %0:mqpr, 11, 0, $noreg
+    %8:vccr = MVE_VCMPf32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %9:vccr = MVE_VCMPf32 %1:mqpr, %0:mqpr, 11, 0, $noreg, $noreg
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -576,11 +576,11 @@ body:             |
   ;
   bb.0:
     ; CHECK-LABEL: name: different_opcodes
-    ; CHECK: [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 0, 0, $noreg
-    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 1, 1, $noreg
+    ; CHECK: [[MVE_VCMPf16_:%[0-9]+]]:vccr = MVE_VCMPf16 %1:mqpr, %2:mqpr, 0, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 1, 1, $noreg, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
-    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 0, 0, $noreg
-    %3:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 1, 1, $noreg
+    %2:vccr = MVE_VCMPf16 %0:mqpr, %1:mqpr, 0, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 1, 1, $noreg, $noreg
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -590,22 +590,22 @@ body:             |
   ; CHECK-LABEL: name: incorrect_condcode
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 11, 0, $noreg
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 11, 0, $noreg, $noreg
   ; CHECK: bb.1:
-  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VCMPs32_3:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 12, 0, $noreg
+  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VCMPs32_3:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 12, 0, $noreg, $noreg
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that a VCMP is not transformed into a VPNOT if its CondCode is not
   ; the opposite CondCode.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 11, 0, $noreg
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 11, 0, $noreg, $noreg
   bb.1:
-    %4:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %5:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 12, 0, $noreg
+    %4:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %5:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 12, 0, $noreg, $noreg
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -618,13 +618,13 @@ body:             |
   ;
   bb.0:
     ; CHECK-LABEL: name: vpr_or_vccr_write_between_vcmps
-    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 12, 0, $noreg
-    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT killed [[MVE_VCMPs32_]], 0, $noreg
-    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 10, 0, $noreg
+    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 12, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT killed [[MVE_VCMPs32_]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 12, 0, $noreg
-    %3:vccr = MVE_VPNOT killed %2:vccr, 0, $noreg
-    %4:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 10, 0, $noreg
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 12, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT killed %2:vccr, 0, $noreg, $noreg
+    %4:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 10, 0, $noreg, $noreg
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -634,122 +634,122 @@ body:             |
   ; CHECK-LABEL: name: spill_prevention
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg
-  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], undef [[MVE_VORR1]]
-  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], undef [[MVE_VORR2]]
-  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg
-  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT3]], undef [[MVE_VORR3]]
-  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT3]], 0, $noreg
-  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR3]], 1, [[MVE_VPNOT4]], undef [[MVE_VORR4]]
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR2]]
+  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT3]], $noreg, undef [[MVE_VORR3]]
+  ; CHECK:   [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT3]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR3]], 1, [[MVE_VPNOT4]], $noreg, undef [[MVE_VORR4]]
   ; CHECK: bb.1:
   ; CHECK:   successors: %bb.2(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT5]], undef [[MVE_VORR5]]
-  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 0, $noreg, undef [[MVE_VORR6]]
-  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg
-  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], undef [[MVE_VORR7]]
-  ; CHECK:   [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR7]], [[MVE_VORR7]], 0, $noreg, undef [[MVE_VORR8]]
-  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT6]], 0, $noreg
-  ; CHECK:   [[MVE_VORR9:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR8]], [[MVE_VORR8]], 1, [[MVE_VPNOT7]], undef [[MVE_VORR9]]
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT5]], $noreg, undef [[MVE_VORR5]]
+  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 0, $noreg, $noreg, undef [[MVE_VORR6]]
+  ; CHECK:   [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], $noreg, undef [[MVE_VORR7]]
+  ; CHECK:   [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR7]], [[MVE_VORR7]], 0, $noreg, $noreg, undef [[MVE_VORR8]]
+  ; CHECK:   [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT6]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR9:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR8]], [[MVE_VORR8]], 1, [[MVE_VPNOT7]], $noreg, undef [[MVE_VORR9]]
   ; CHECK: bb.2:
   ; CHECK:   successors: %bb.3(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg
-  ; CHECK:   [[MVE_VORR10:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT8]], undef [[MVE_VORR10]]
-  ; CHECK:   [[MVE_VORR11:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR10]], [[MVE_VORR10]], 1, [[MVE_VPNOT8]], undef [[MVE_VORR11]]
-  ; CHECK:   [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT8]], 0, $noreg
-  ; CHECK:   [[MVE_VORR12:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR11]], [[MVE_VORR11]], 1, [[MVE_VPNOT9]], undef [[MVE_VORR12]]
-  ; CHECK:   [[MVE_VORR13:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR12]], [[MVE_VORR12]], 1, [[MVE_VPNOT9]], undef [[MVE_VORR13]]
-  ; CHECK:   [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT9]], 0, $noreg
-  ; CHECK:   [[MVE_VORR14:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR13]], [[MVE_VORR13]], 1, [[MVE_VPNOT10]], undef [[MVE_VORR14]]
-  ; CHECK:   [[MVE_VORR15:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR14]], [[MVE_VORR14]], 1, [[MVE_VPNOT10]], undef [[MVE_VORR15]]
-  ; CHECK:   [[MVE_VPNOT11:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT10]], 0, $noreg
-  ; CHECK:   [[MVE_VORR16:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR15]], [[MVE_VORR15]], 1, [[MVE_VPNOT11]], undef [[MVE_VORR16]]
-  ; CHECK:   [[MVE_VORR17:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR16]], [[MVE_VORR16]], 1, [[MVE_VPNOT11]], undef [[MVE_VORR17]]
+  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR10:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT8]], $noreg, undef [[MVE_VORR10]]
+  ; CHECK:   [[MVE_VORR11:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR10]], [[MVE_VORR10]], 1, [[MVE_VPNOT8]], $noreg, undef [[MVE_VORR11]]
+  ; CHECK:   [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT8]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR12:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR11]], [[MVE_VORR11]], 1, [[MVE_VPNOT9]], $noreg, undef [[MVE_VORR12]]
+  ; CHECK:   [[MVE_VORR13:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR12]], [[MVE_VORR12]], 1, [[MVE_VPNOT9]], $noreg, undef [[MVE_VORR13]]
+  ; CHECK:   [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT9]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR14:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR13]], [[MVE_VORR13]], 1, [[MVE_VPNOT10]], $noreg, undef [[MVE_VORR14]]
+  ; CHECK:   [[MVE_VORR15:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR14]], [[MVE_VORR14]], 1, [[MVE_VPNOT10]], $noreg, undef [[MVE_VORR15]]
+  ; CHECK:   [[MVE_VPNOT11:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT10]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR16:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR15]], [[MVE_VORR15]], 1, [[MVE_VPNOT11]], $noreg, undef [[MVE_VORR16]]
+  ; CHECK:   [[MVE_VORR17:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR16]], [[MVE_VORR16]], 1, [[MVE_VPNOT11]], $noreg, undef [[MVE_VORR17]]
   ; CHECK: bb.3:
   ; CHECK:   successors: %bb.4(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_3:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT12:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_3]], 0, $noreg
-  ; CHECK:   [[MVE_VORR18:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT12]], undef [[MVE_VORR11]]
-  ; CHECK:   [[MVE_VPNOT13:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT12]], 0, $noreg
-  ; CHECK:   [[MVE_VORR19:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT13]], undef [[MVE_VORR19]]
+  ; CHECK:   [[MVE_VCMPs32_3:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT12:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_3]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR18:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT12]], $noreg, undef [[MVE_VORR11]]
+  ; CHECK:   [[MVE_VPNOT13:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT12]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR19:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT13]], $noreg, undef [[MVE_VORR19]]
   ; CHECK: bb.4:
   ; CHECK:   [[VMSR_P0_:%[0-9]+]]:vccr = VMSR_P0 killed %32:gpr, 14 /* CC::al */, $noreg
-  ; CHECK:   [[MVE_VPNOT14:%[0-9]+]]:vccr = MVE_VPNOT [[VMSR_P0_]], 0, $noreg
-  ; CHECK:   [[MVE_VORR20:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR19]], [[MVE_VORR19]], 1, [[MVE_VPNOT14]], undef [[MVE_VORR20]]
-  ; CHECK:   [[MVE_VPNOT15:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT14]], 0, $noreg
-  ; CHECK:   [[MVE_VORR21:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR20]], [[MVE_VORR20]], 1, [[MVE_VPNOT15]], undef [[MVE_VORR21]]
-  ; CHECK:   [[MVE_VPNOT16:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT15]], 0, $noreg
-  ; CHECK:   [[MVE_VORR22:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR21]], [[MVE_VORR21]], 1, [[MVE_VPNOT16]], undef [[MVE_VORR22]]
-  ; CHECK:   [[MVE_VPNOT17:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT16]], 0, $noreg
-  ; CHECK:   [[MVE_VORR23:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR22]], [[MVE_VORR22]], 1, [[MVE_VPNOT17]], undef [[MVE_VORR23]]
-  ; CHECK:   [[MVE_VPNOT18:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT17]], 0, $noreg
-  ; CHECK:   [[MVE_VORR24:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR23]], [[MVE_VORR23]], 1, [[MVE_VPNOT18]], undef [[MVE_VORR24]]
+  ; CHECK:   [[MVE_VPNOT14:%[0-9]+]]:vccr = MVE_VPNOT [[VMSR_P0_]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR20:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR19]], [[MVE_VORR19]], 1, [[MVE_VPNOT14]], $noreg, undef [[MVE_VORR20]]
+  ; CHECK:   [[MVE_VPNOT15:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT14]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR21:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR20]], [[MVE_VORR20]], 1, [[MVE_VPNOT15]], $noreg, undef [[MVE_VORR21]]
+  ; CHECK:   [[MVE_VPNOT16:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT15]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR22:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR21]], [[MVE_VORR21]], 1, [[MVE_VPNOT16]], $noreg, undef [[MVE_VORR22]]
+  ; CHECK:   [[MVE_VPNOT17:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT16]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR23:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR22]], [[MVE_VORR22]], 1, [[MVE_VPNOT17]], $noreg, undef [[MVE_VORR23]]
+  ; CHECK:   [[MVE_VPNOT18:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT17]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR24:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR23]], [[MVE_VORR23]], 1, [[MVE_VPNOT18]], $noreg, undef [[MVE_VORR24]]
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   bb.0:
     ;
     ; Basic test case
     ;
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, undef %4:mqpr
-    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, undef %5:mqpr
-    %6:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %3:vccr, undef %6:mqpr
-    %7:mqpr = MVE_VORR %6:mqpr, %6:mqpr, 1, %2:vccr, undef %7:mqpr
-    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %3:vccr, undef %8:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, $noreg, undef %4:mqpr
+    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, $noreg, undef %5:mqpr
+    %6:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %3:vccr, $noreg, undef %6:mqpr
+    %7:mqpr = MVE_VORR %6:mqpr, %6:mqpr, 1, %2:vccr, $noreg, undef %7:mqpr
+    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %3:vccr, $noreg, undef %8:mqpr
   bb.1:
     ;
     ; Tests that unpredicated instructions in the middle of the block
     ; don't interfere with the replacement.
     ;
-    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %10:vccr = MVE_VPNOT %9:vccr, 0, $noreg
-    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, undef %11:mqpr
-    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 0, $noreg, undef %12:mqpr
-    %13:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %9:vccr, undef %13:mqpr
-    %14:mqpr = MVE_VORR %13:mqpr, %13:mqpr, 0, $noreg, undef %14:mqpr
-    %15:mqpr = MVE_VORR %14:mqpr, %14:mqpr, 1, %10:vccr, undef %15:mqpr
+    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %10:vccr = MVE_VPNOT %9:vccr, 0, $noreg, $noreg
+    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, $noreg, undef %11:mqpr
+    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 0, $noreg, $noreg, undef %12:mqpr
+    %13:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %9:vccr, $noreg, undef %13:mqpr
+    %14:mqpr = MVE_VORR %13:mqpr, %13:mqpr, 0, $noreg, $noreg, undef %14:mqpr
+    %15:mqpr = MVE_VORR %14:mqpr, %14:mqpr, 1, %10:vccr, $noreg, undef %15:mqpr
   bb.2:
     ;
     ; Tests that all uses of the register are replaced, even when it's used
     ; multiple times in a row.
     ;
-    %16:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %17:vccr = MVE_VPNOT %16:vccr, 0, $noreg
-    %18:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %17:vccr, undef %18:mqpr
-    %19:mqpr = MVE_VORR %18:mqpr, %18:mqpr, 1, %17:vccr, undef %19:mqpr
-    %20:mqpr = MVE_VORR %19:mqpr, %19:mqpr, 1, %16:vccr, undef %20:mqpr
-    %21:mqpr = MVE_VORR %20:mqpr, %20:mqpr, 1, %16:vccr, undef %21:mqpr
-    %22:mqpr = MVE_VORR %21:mqpr, %21:mqpr, 1, %17:vccr, undef %22:mqpr
-    %23:mqpr = MVE_VORR %22:mqpr, %22:mqpr, 1, %17:vccr, undef %23:mqpr
-    %24:mqpr = MVE_VORR %23:mqpr, %23:mqpr, 1, %16:vccr, undef %24:mqpr
-    %25:mqpr = MVE_VORR %24:mqpr, %24:mqpr, 1, %16:vccr, undef %25:mqpr
+    %16:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %17:vccr = MVE_VPNOT %16:vccr, 0, $noreg, $noreg
+    %18:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %17:vccr, $noreg, undef %18:mqpr
+    %19:mqpr = MVE_VORR %18:mqpr, %18:mqpr, 1, %17:vccr, $noreg, undef %19:mqpr
+    %20:mqpr = MVE_VORR %19:mqpr, %19:mqpr, 1, %16:vccr, $noreg, undef %20:mqpr
+    %21:mqpr = MVE_VORR %20:mqpr, %20:mqpr, 1, %16:vccr, $noreg, undef %21:mqpr
+    %22:mqpr = MVE_VORR %21:mqpr, %21:mqpr, 1, %17:vccr, $noreg, undef %22:mqpr
+    %23:mqpr = MVE_VORR %22:mqpr, %22:mqpr, 1, %17:vccr, $noreg, undef %23:mqpr
+    %24:mqpr = MVE_VORR %23:mqpr, %23:mqpr, 1, %16:vccr, $noreg, undef %24:mqpr
+    %25:mqpr = MVE_VORR %24:mqpr, %24:mqpr, 1, %16:vccr, $noreg, undef %25:mqpr
   bb.3:
     ;
     ; Tests that already present VPNOTs are "registered" by the pass so
     ; it does not insert a useless VPNOT.
     ;
-    %26:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %27:vccr = MVE_VPNOT %26:vccr, 0, $noreg
-    %28:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %27:vccr, undef %19:mqpr
-    %29:vccr = MVE_VPNOT %27:vccr, 0, $noreg
-    %30:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %26:vccr, undef %30:mqpr
+    %26:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %27:vccr = MVE_VPNOT %26:vccr, 0, $noreg, $noreg
+    %28:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %27:vccr, $noreg, undef %19:mqpr
+    %29:vccr = MVE_VPNOT %27:vccr, 0, $noreg, $noreg
+    %30:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %26:vccr, $noreg, undef %30:mqpr
   bb.4:
     ;
     ; Tests that the pass works with instructions other than vcmp.
     ;
     %32:vccr = VMSR_P0 killed %31:gpr, 14, $noreg
-    %33:vccr = MVE_VPNOT %32:vccr, 0, $noreg
-    %34:mqpr = MVE_VORR %30:mqpr, %30:mqpr, 1, %33:vccr, undef %34:mqpr
-    %35:mqpr = MVE_VORR %34:mqpr, %34:mqpr, 1, %32:vccr, undef %35:mqpr
-    %36:mqpr = MVE_VORR %35:mqpr, %35:mqpr, 1, %33:vccr, undef %36:mqpr
-    %37:mqpr = MVE_VORR %36:mqpr, %36:mqpr, 1, %32:vccr, undef %37:mqpr
-    %38:mqpr = MVE_VORR %37:mqpr, %37:mqpr, 1, %33:vccr, undef %38:mqpr
+    %33:vccr = MVE_VPNOT %32:vccr, 0, $noreg, $noreg
+    %34:mqpr = MVE_VORR %30:mqpr, %30:mqpr, 1, %33:vccr, $noreg, undef %34:mqpr
+    %35:mqpr = MVE_VORR %34:mqpr, %34:mqpr, 1, %32:vccr, $noreg, undef %35:mqpr
+    %36:mqpr = MVE_VORR %35:mqpr, %35:mqpr, 1, %33:vccr, $noreg, undef %36:mqpr
+    %37:mqpr = MVE_VORR %36:mqpr, %36:mqpr, 1, %32:vccr, $noreg, undef %37:mqpr
+    %38:mqpr = MVE_VORR %37:mqpr, %37:mqpr, 1, %33:vccr, $noreg, undef %38:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -761,51 +761,51 @@ body:             |
     ; Tests that multiple groups of predicated instructions in the same basic block are optimized.
     ;
     ; CHECK-LABEL: name: spill_prevention_multi
-    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-    ; CHECK: [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR]]
-    ; CHECK: [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg
-    ; CHECK: [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], undef [[MVE_VORR1]]
-    ; CHECK: [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg
-    ; CHECK: [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], undef [[MVE_VORR2]]
-    ; CHECK: [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg
-    ; CHECK: [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT3]], undef [[MVE_VORR3]]
-    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-    ; CHECK: [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], undef [[MVE_VORR4]]
-    ; CHECK: [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg
-    ; CHECK: [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR4]], [[MVE_VORR4]], 1, [[MVE_VPNOT4]], undef [[MVE_VORR5]]
-    ; CHECK: [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT4]], 0, $noreg
-    ; CHECK: [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT5]], undef [[MVE_VORR6]]
-    ; CHECK: [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg
-    ; CHECK: [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], undef [[MVE_VORR7]]
-    ; CHECK: [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-    ; CHECK: [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg
-    ; CHECK: [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT7]], undef [[MVE_VORR8]]
-    ; CHECK: [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT7]], 0, $noreg
-    ; CHECK: [[MVE_VORR9:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR8]], [[MVE_VORR8]], 1, [[MVE_VPNOT8]], undef [[MVE_VORR9]]
-    ; CHECK: [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT8]], 0, $noreg
-    ; CHECK: [[MVE_VORR10:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR9]], [[MVE_VORR9]], 1, [[MVE_VPNOT9]], undef [[MVE_VORR10]]
-    ; CHECK: [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT9]], 0, $noreg
-    ; CHECK: [[MVE_VORR11:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR10]], [[MVE_VORR10]], 1, [[MVE_VPNOT10]], undef [[MVE_VORR11]]
+    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR]]
+    ; CHECK: [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR1]]
+    ; CHECK: [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT1]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR2]]
+    ; CHECK: [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT3]], $noreg, undef [[MVE_VORR3]]
+    ; CHECK: [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], $noreg, undef [[MVE_VORR4]]
+    ; CHECK: [[MVE_VPNOT4:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR4]], [[MVE_VORR4]], 1, [[MVE_VPNOT4]], $noreg, undef [[MVE_VORR5]]
+    ; CHECK: [[MVE_VPNOT5:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT4]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT5]], $noreg, undef [[MVE_VORR6]]
+    ; CHECK: [[MVE_VPNOT6:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT5]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VPNOT6]], $noreg, undef [[MVE_VORR7]]
+    ; CHECK: [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VPNOT7:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_2]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT7]], $noreg, undef [[MVE_VORR8]]
+    ; CHECK: [[MVE_VPNOT8:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT7]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR9:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR8]], [[MVE_VORR8]], 1, [[MVE_VPNOT8]], $noreg, undef [[MVE_VORR9]]
+    ; CHECK: [[MVE_VPNOT9:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT8]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR10:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR9]], [[MVE_VORR9]], 1, [[MVE_VPNOT9]], $noreg, undef [[MVE_VORR10]]
+    ; CHECK: [[MVE_VPNOT10:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT9]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR11:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR10]], [[MVE_VORR10]], 1, [[MVE_VPNOT10]], $noreg, undef [[MVE_VORR11]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, undef %4:mqpr
-    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, undef %5:mqpr
-    %6:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %3:vccr, undef %6:mqpr
-    %7:mqpr = MVE_VORR %6:mqpr, %6:mqpr, 1, %2:vccr, undef %7:mqpr
-    %8:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %9:vccr = MVE_VPNOT %8:vccr, 0, $noreg
-    %10:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %8:vccr, undef %10:mqpr
-    %11:mqpr = MVE_VORR %10:mqpr, %10:mqpr, 1, %9:vccr, undef %11:mqpr
-    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, %8:vccr, undef %12:mqpr
-    %13:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %9:vccr, undef %13:mqpr
-    %14:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %15:vccr = MVE_VPNOT %14:vccr, 0, $noreg
-    %16:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %15:vccr, undef %16:mqpr
-    %17:mqpr = MVE_VORR %16:mqpr, %16:mqpr, 1, %14:vccr, undef %17:mqpr
-    %18:mqpr = MVE_VORR %17:mqpr, %17:mqpr, 1, %15:vccr, undef %18:mqpr
-    %19:mqpr = MVE_VORR %18:mqpr, %18:mqpr, 1, %14:vccr, undef %19:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, $noreg, undef %4:mqpr
+    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, $noreg, undef %5:mqpr
+    %6:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %3:vccr, $noreg, undef %6:mqpr
+    %7:mqpr = MVE_VORR %6:mqpr, %6:mqpr, 1, %2:vccr, $noreg, undef %7:mqpr
+    %8:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %9:vccr = MVE_VPNOT %8:vccr, 0, $noreg, $noreg
+    %10:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %8:vccr, $noreg, undef %10:mqpr
+    %11:mqpr = MVE_VORR %10:mqpr, %10:mqpr, 1, %9:vccr, $noreg, undef %11:mqpr
+    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, %8:vccr, $noreg, undef %12:mqpr
+    %13:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %9:vccr, $noreg, undef %13:mqpr
+    %14:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %15:vccr = MVE_VPNOT %14:vccr, 0, $noreg, $noreg
+    %16:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %15:vccr, $noreg, undef %16:mqpr
+    %17:mqpr = MVE_VORR %16:mqpr, %16:mqpr, 1, %14:vccr, $noreg, undef %17:mqpr
+    %18:mqpr = MVE_VORR %17:mqpr, %17:mqpr, 1, %15:vccr, $noreg, undef %18:mqpr
+    %19:mqpr = MVE_VORR %18:mqpr, %18:mqpr, 1, %14:vccr, $noreg, undef %19:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -815,32 +815,32 @@ body:             |
   ; CHECK-LABEL: name: spill_prevention_predicated_vpnots
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 1, [[MVE_VCMPs32_]]
-  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT]], undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 1, [[MVE_VCMPs32_]], $noreg
+  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR1]]
   ; CHECK: bb.1:
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 1, [[MVE_VCMPs32_1]]
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VPNOT1]], undef [[MVE_VORR2]]
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], undef [[MVE_VORR2]]
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VPNOT1]], undef [[MVE_VORR2]]
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 1, [[MVE_VCMPs32_1]], $noreg
+  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], $noreg, undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VPNOT1]], $noreg, undef %11:mqpr
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that predicated VPNOTs are not considered by this pass
   ; (This means that these examples should not be optimized.)
   ;
   bb.0:
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 1, %2:vccr
-    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %2:vccr, undef %4:mqpr
-    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %3:vccr, undef %5:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 1, %2:vccr, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %2:vccr, $noreg, undef %4:mqpr
+    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %3:vccr, $noreg, undef %5:mqpr
   bb.1:
-    %12:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %13:vccr = MVE_VPNOT %12:vccr, 1, %12:vccr
-    %14:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %13:vccr, undef %14:mqpr
-    %15:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %12:vccr, undef %15:mqpr
-    %16:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, %13:vccr, undef %16:mqpr
+    %12:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %13:vccr = MVE_VPNOT %2:vccr, 1, %12:vccr, $noreg
+    %14:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %13:vccr, $noreg, undef %4:mqpr
+    %15:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %12:vccr, $noreg, undef %5:mqpr
+    %16:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, %13:vccr, $noreg, undef %6:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -853,19 +853,19 @@ body:             |
   ;
   bb.0:
     ; CHECK-LABEL: name: spill_prevention_copies
-    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-    ; CHECK: [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR]]
-    ; CHECK: [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR1]]
-    ; CHECK: [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR2]]
+    ; CHECK: [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+    ; CHECK: [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+    ; CHECK: [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR]]
+    ; CHECK: [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR1]]
+    ; CHECK: [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR2]]
     ; CHECK: tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, undef %4:mqpr
-    %5:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %6:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %5:vccr, undef %6:mqpr
-    %7:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %8:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %7:vccr, undef %8:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, $noreg, undef %4:mqpr
+    %5:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %6:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %5:vccr, $noreg, undef %6:mqpr
+    %7:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %8:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %7:vccr, $noreg, undef %8:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -875,35 +875,35 @@ body:             |
   ; CHECK-LABEL: name: spill_prevention_vpnot_reordering
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPs32_]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_]], undef [[MVE_VORR1]]
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR1]], 1, [[MVE_VPNOT]], undef [[MVE_VORR2]]
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPs32_]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_]], $noreg, undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR1]], 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR2]]
   ; CHECK: bb.1:
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPs32_1]], undef [[MVE_VORR3]]
-  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], undef [[MVE_VORR4]]
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR4]], 1, [[MVE_VPNOT1]], undef [[MVE_VORR5]]
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %2:mqpr, 1, [[MVE_VCMPs32_1]], $noreg, undef [[MVE_VORR3]]
+  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %2:mqpr, %1:mqpr, 1, [[MVE_VCMPs32_1]], $noreg, undef [[MVE_VORR4]]
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR3]], [[MVE_VORR4]], 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR5]]
   ; CHECK:   tBX_RET 14 /* CC::al */, $noreg, implicit %1:mqpr
   ;
   ; Tests that the first VPNOT is moved down when the result of the VCMP is used
   ; before the first usage of the VPNOT's result.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %4:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %2:vccr, undef %4:mqpr
-    %5:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, %2:vccr, undef %5:mqpr
-    %6:mqpr = MVE_VORR %4:mqpr, %5:mqpr, 1, %3:vccr, undef %6:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %2:vccr, $noreg, undef %4:mqpr
+    %5:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, %2:vccr, $noreg, undef %5:mqpr
+    %6:mqpr = MVE_VORR %4:mqpr, %5:mqpr, 1, %3:vccr, $noreg, undef %6:mqpr
   bb.1:
     ; Test again with a "killed" flag to check if it's properly removed.
-    %7:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %8:vccr = MVE_VPNOT %7:vccr, 0, $noreg
-    %9:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %7:vccr, undef %9:mqpr
-    %10:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, killed %7:vccr, undef %10:mqpr
-    %11:mqpr = MVE_VORR %9:mqpr, %10:mqpr, 1, %8:vccr, undef %11:mqpr
+    %7:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %8:vccr = MVE_VPNOT %7:vccr, 0, $noreg, $noreg
+    %9:mqpr = MVE_VORR %0:mqpr, %1:mqpr, 1, %7:vccr, $noreg, undef %9:mqpr
+    %10:mqpr = MVE_VORR %1:mqpr, %0:mqpr, 1, killed %7:vccr, $noreg, undef %10:mqpr
+    %11:mqpr = MVE_VORR %9:mqpr, %10:mqpr, 1, %8:vccr, $noreg, undef %11:mqpr
     tBX_RET 14, $noreg, implicit %0:mqpr
 ...
 ---
@@ -913,43 +913,43 @@ body:             |
   ; CHECK-LABEL: name: spill_prevention_stop_after_write
   ; CHECK: bb.0:
   ; CHECK:   successors: %bb.1(0x80000000)
-  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg
-  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg
-  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], undef [[MVE_VORR1]]
+  ; CHECK:   [[MVE_VCMPs32_:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VPNOT1:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR1:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR]], [[MVE_VORR]], 1, [[MVE_VPNOT1]], $noreg, undef [[MVE_VORR1]]
   ; CHECK:   [[VMSR_P0_:%[0-9]+]]:vccr = VMSR_P0 killed %7:gpr, 14 /* CC::al */, $noreg
-  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VCMPs32_]], undef [[MVE_VORR2]]
-  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT]], undef [[MVE_VORR3]]
+  ; CHECK:   [[MVE_VORR2:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR1]], [[MVE_VORR1]], 1, [[MVE_VCMPs32_]], $noreg, undef [[MVE_VORR2]]
+  ; CHECK:   [[MVE_VORR3:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR2]], [[MVE_VORR2]], 1, [[MVE_VPNOT]], $noreg, undef [[MVE_VORR3]]
   ; CHECK: bb.1:
-  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg
-  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT2]], undef [[MVE_VORR]]
-  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg
-  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR4]], [[MVE_VORR4]], 1, [[MVE_VPNOT3]], undef [[MVE_VORR5]]
-  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 10, 0, $noreg
-  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT2]], undef [[MVE_VORR6]]
-  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VCMPs32_1]], undef [[MVE_VORR7]]
-  ; CHECK:   [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR7]], [[MVE_VORR7]], 1, [[MVE_VPNOT2]], undef [[MVE_VORR8]]
+  ; CHECK:   [[MVE_VCMPs32_1:%[0-9]+]]:vccr = MVE_VCMPs32 %1:mqpr, %2:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VPNOT2:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VCMPs32_1]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR4:%[0-9]+]]:mqpr = MVE_VORR %1:mqpr, %1:mqpr, 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR]]
+  ; CHECK:   [[MVE_VPNOT3:%[0-9]+]]:vccr = MVE_VPNOT [[MVE_VPNOT2]], 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR5:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR4]], [[MVE_VORR4]], 1, [[MVE_VPNOT3]], $noreg, undef [[MVE_VORR5]]
+  ; CHECK:   [[MVE_VCMPs32_2:%[0-9]+]]:vccr = MVE_VCMPs32 %2:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+  ; CHECK:   [[MVE_VORR6:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR5]], [[MVE_VORR5]], 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR6]]
+  ; CHECK:   [[MVE_VORR7:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR6]], [[MVE_VORR6]], 1, [[MVE_VCMPs32_1]], $noreg, undef [[MVE_VORR7]]
+  ; CHECK:   [[MVE_VORR8:%[0-9]+]]:mqpr = MVE_VORR [[MVE_VORR7]], [[MVE_VORR7]], 1, [[MVE_VPNOT2]], $noreg, undef [[MVE_VORR8]]
   ;
   ; Tests that the optimisation stops when it sees an instruction
   ; that writes to VPR, and that doesn't use any of the registers we care about.
   ;
   bb.0:
-    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg
-    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, undef %4:mqpr
-    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, undef %5:mqpr
+    %2:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %3:vccr = MVE_VPNOT %2:vccr, 0, $noreg, $noreg
+    %4:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %3:vccr, $noreg, undef %4:mqpr
+    %5:mqpr = MVE_VORR %4:mqpr, %4:mqpr, 1, %2:vccr, $noreg, undef %5:mqpr
     %6:vccr = VMSR_P0 killed %20:gpr, 14, $noreg
-    %7:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %2:vccr, undef %7:mqpr
-    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %3:vccr, undef %8:mqpr
+    %7:mqpr = MVE_VORR %5:mqpr, %5:mqpr, 1, %2:vccr, $noreg, undef %7:mqpr
+    %8:mqpr = MVE_VORR %7:mqpr, %7:mqpr, 1, %3:vccr, $noreg, undef %8:mqpr
   bb.1:
-    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg
-    %10:vccr = MVE_VPNOT %9:vccr, 0, $noreg
-    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, undef %4:mqpr
-    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, %9:vccr, undef %12:mqpr
-    %13:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 10, 0, $noreg
-    %14:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %10:vccr, undef %14:mqpr
-    %15:mqpr = MVE_VORR %14:mqpr, %14:mqpr, 1, %9:vccr, undef %15:mqpr
-    %16:mqpr = MVE_VORR %15:mqpr, %15:mqpr, 1, %10:vccr, undef %16:mqpr
+    %9:vccr = MVE_VCMPs32 %0:mqpr, %1:mqpr, 10, 0, $noreg, $noreg
+    %10:vccr = MVE_VPNOT %9:vccr, 0, $noreg, $noreg
+    %11:mqpr = MVE_VORR %0:mqpr, %0:mqpr, 1, %10:vccr, $noreg, undef %4:mqpr
+    %12:mqpr = MVE_VORR %11:mqpr, %11:mqpr, 1, %9:vccr, $noreg, undef %12:mqpr
+    %13:vccr = MVE_VCMPs32 %1:mqpr, %0:mqpr, 10, 0, $noreg, $noreg
+    %14:mqpr = MVE_VORR %12:mqpr, %12:mqpr, 1, %10:vccr, $noreg, undef %14:mqpr
+    %15:mqpr = MVE_VORR %14:mqpr, %14:mqpr, 1, %9:vccr, $noreg, undef %15:mqpr
+    %16:mqpr = MVE_VORR %15:mqpr, %15:mqpr, 1, %10:vccr, $noreg, undef %16:mqpr
 ...

diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-preuse.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-preuse.mir
index fe6f4a3f595d3..3499916bed912 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vpt-preuse.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-vpt-preuse.mir
@@ -64,19 +64,19 @@ body:             |
     ; CHECK-LABEL: name: vpt_preuse
     ; CHECK: successors: %bb.0(0x80000000)
     ; CHECK: liveins: $lr, $q0, $q1, $q2, $q3, $q4, $q5, $r0, $r1, $r2, $r7, $r8, $r9, $r10, $r11, $r12
-    ; CHECK: renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg
+    ; CHECK: renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg, $noreg
     ; CHECK: renamable $r4 = t2ADDrr renamable $r2, renamable $r10, 14 /* CC::al */, $noreg, $noreg
     ; CHECK: VSTR_P0_off renamable $vpr, $sp, 0, 14 /* CC::al */, $noreg
     ; CHECK: BUNDLE implicit-def $q6, implicit-def $d12, implicit-def $s24, implicit-def $s25, implicit-def $d13, implicit-def $s26, implicit-def $s27, implicit $vpr, implicit killed $r4 {
     ; CHECK:   MVE_VPST 8, implicit $vpr
-    ; CHECK:   renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr
+    ; CHECK:   renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg
     ; CHECK: }
     ; CHECK: t2LoopEnd renamable $lr, %bb.0, implicit-def dead $cpsr
     ; CHECK: t2B %bb.0, 14 /* CC::al */, $noreg
-  renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg
+  renamable $vpr = MVE_VCMPu32 renamable $q1, renamable $q5, 2, 0, $noreg, $noreg
   renamable $r4 = t2ADDrr renamable $r2, renamable $r10, 14, $noreg, $noreg
   VSTR_P0_off renamable $vpr, $sp, 0, 14, $noreg
-  renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr
+  renamable $q6 = MVE_VLDRBU32 killed renamable $r4, 0, 1, renamable $vpr, $noreg
   t2LoopEnd renamable $lr, %bb.0, implicit-def dead $cpsr
   t2B %bb.0, 14, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/mve-wls-block-placement.mir b/llvm/test/CodeGen/Thumb2/mve-wls-block-placement.mir
index 4854e1a53b963..53de31cf89384 100644
--- a/llvm/test/CodeGen/Thumb2/mve-wls-block-placement.mir
+++ b/llvm/test/CodeGen/Thumb2/mve-wls-block-placement.mir
@@ -623,8 +623,8 @@ body:             |
   ; CHECK: bb.5:
   ; CHECK:   successors: %bb.5(0x7c000000), %bb.1(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r2, $r3
-  ; CHECK:   renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 1, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 1, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   renamable $lr = t2LoopEndDec killed renamable $lr, %bb.5, implicit-def dead $cpsr
   ; CHECK:   t2B %bb.1, 14 /* CC::al */, $noreg
   ; CHECK: bb.6:
@@ -634,8 +634,8 @@ body:             |
   ; CHECK: bb.7:
   ; CHECK:   successors: %bb.7(0x7c000000), %bb.3(0x04000000)
   ; CHECK:   liveins: $lr, $r0, $r1, $r3
-  ; CHECK:   renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 2, 0, $noreg, undef renamable $q0
-  ; CHECK:   MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg
+  ; CHECK:   renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 2, 0, $noreg, $noreg, undef renamable $q0
+  ; CHECK:   MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg, $noreg
   ; CHECK:   renamable $lr = t2LoopEndDec killed renamable $lr, %bb.7, implicit-def dead $cpsr
   ; CHECK:   t2B %bb.3, 14 /* CC::al */, $noreg
   bb.0:
@@ -676,8 +676,8 @@ body:             |
     successors: %bb.4(0x7c000000), %bb.2(0x04000000)
     liveins: $lr, $r0, $r1, $r2, $r3
 
-    renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 1, 0, $noreg, undef renamable $q0
-    MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg
+    renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 1, 0, $noreg, $noreg, undef renamable $q0
+    MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.4, implicit-def dead $cpsr
     t2B %bb.2, 14 /* CC::al */, $noreg
 
@@ -698,8 +698,8 @@ body:             |
     successors: %bb.6(0x7c000000), %bb.7(0x04000000)
     liveins: $lr, $r0, $r1, $r3
 
-    renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 2, 0, $noreg, undef renamable $q0
-    MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg
+    renamable $q0, renamable $r0 = MVE_VIWDUPu16 killed renamable $r0, renamable $r3, 2, 0, $noreg, $noreg, undef renamable $q0
+    MVE_VSTRH16_rq undef renamable $q0, renamable $r1, killed renamable $q0, 0, $noreg, $noreg
     renamable $lr = t2LoopEndDec killed renamable $lr, %bb.6, implicit-def dead $cpsr
     t2B %bb.7, 14 /* CC::al */, $noreg
 

diff --git a/llvm/test/CodeGen/Thumb2/phi_prevent_copy.mir b/llvm/test/CodeGen/Thumb2/phi_prevent_copy.mir
index e9e7492c0f1c1..1ca7f92fdc48e 100644
--- a/llvm/test/CodeGen/Thumb2/phi_prevent_copy.mir
+++ b/llvm/test/CodeGen/Thumb2/phi_prevent_copy.mir
@@ -50,10 +50,10 @@ body:             |
   ; CHECK:   [[COPY7:%[0-9]+]]:gprlr = COPY [[t2WhileLoopStartLR]]
   ; CHECK:   [[COPY8:%[0-9]+]]:rgpr = COPY [[COPY4]]
   ; CHECK:   [[COPY9:%[0-9]+]]:rgpr = COPY [[COPY3]]
-  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[COPY6]], 0, $noreg
+  ; CHECK:   [[MVE_VCTP8_:%[0-9]+]]:vccr = MVE_VCTP8 [[COPY6]], 0, $noreg, $noreg
   ; CHECK:   [[t2SUBri:%[0-9]+]]:rgpr = t2SUBri killed [[COPY6]], 16, 14 /* CC::al */, $noreg, $noreg
-  ; CHECK:   [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post killed [[COPY9]], 16, 1, [[MVE_VCTP8_]]
-  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post killed [[MVE_VLDRBU8_post1]], killed [[COPY8]], 16, 1, killed [[MVE_VCTP8_]]
+  ; CHECK:   [[MVE_VLDRBU8_post:%[0-9]+]]:rgpr, [[MVE_VLDRBU8_post1:%[0-9]+]]:mqpr = MVE_VLDRBU8_post killed [[COPY9]], 16, 1, [[MVE_VCTP8_]], $noreg
+  ; CHECK:   [[MVE_VSTRBU8_post:%[0-9]+]]:rgpr = MVE_VSTRBU8_post killed [[MVE_VLDRBU8_post1]], killed [[COPY8]], 16, 1, killed [[MVE_VCTP8_]], $noreg
   ; CHECK:   [[COPY10:%[0-9]+]]:rgpr = COPY [[MVE_VLDRBU8_post]]
   ; CHECK:   [[COPY10:%[0-9]+]]:rgpr = COPY [[MVE_VSTRBU8_post]]
   ; CHECK:   [[COPY10:%[0-9]+]]:rgpr = COPY [[t2SUBri]]
@@ -87,10 +87,10 @@ body:             |
     %9:rgpr = PHI %0, %bb.1, %10, %bb.2
     %11:gprlr = PHI %6, %bb.1, %12, %bb.2
     %13:rgpr = PHI %2, %bb.1, %14, %bb.2
-    %15:vccr = MVE_VCTP8 %13, 0, $noreg
+    %15:vccr = MVE_VCTP8 %13, 0, $noreg, $noreg
     %14:rgpr = t2SUBri killed %13, 16, 14 /* CC::al */, $noreg, $noreg
-    %8:rgpr, %16:mqpr = MVE_VLDRBU8_post killed %7, 16, 1, %15
-    %10:rgpr = MVE_VSTRBU8_post killed %16, killed %9, 16, 1, killed %15
+    %8:rgpr, %16:mqpr = MVE_VLDRBU8_post killed %7, 16, 1, %15, $noreg
+    %10:rgpr = MVE_VSTRBU8_post killed %16, killed %9, 16, 1, killed %15, $noreg
     %12:gprlr = t2LoopEndDec killed %11, %bb.2, implicit-def dead $cpsr
     t2B %bb.3, 14 /* CC::al */, $noreg
 


        

