[llvm] [MachineCopyPropagation] Detect and fix suboptimal instruction order to enable optimizations (PR #98087)

Gábor Spaits via llvm-commits llvm-commits at lists.llvm.org
Mon Jul 8 15:20:40 PDT 2024


https://github.com/spaits created https://github.com/llvm/llvm-project/pull/98087


## The issue
In the Coremark benchmark the following code can be found in core_state.c's core_bench_state function (https://github.com/eembc/coremark/blob/d5fad6bd094899101a4e5fd53af7298160ced6ab/core_state.c#L61):
```c
for (i = 0; i < NUM_CORE_STATES; i++)
{
    crc = crcu32(final_counts[i], crc);
    crc = crcu32(track_counts[i], crc);
}
```
Here is a standalone piece of code that reproduces the same issue:
```c
void init_var(int *v);
int chain(int c, int n);

void start() {
    int a, b, c;

    init_var(&a);
    init_var(&b);
    init_var(&c);

    int r = chain(b, a);
    r = chain(c, r);
}
```
https://godbolt.org/z/7h8E9nG5q

Clang produces the following assembly for the section between the two function calls (on aarch64/arm64):
```asm
bl      _Z5chainii
ldr     w8, [sp, #4]
mov     w1, w0
mov     w0, w8
bl      _Z5chainii
```
While GCC produces this assembly:
```asm
bl      _Z5chainii
mov     w1, w0
ldr     w0, [sp, 28]
bl      _Z5chainii
```
I think we shouldn't move these values around so much.

As you can see, GCC does not "shuffle" the values from register to register; it handles the argument switch with only two instructions.


This problem is also present on RISC-V, and there it is even worse.
Clang generates:
```asm
call    _Z5chainii
lw      a1, 0(sp)
mv      a2, a0
mv      a0, a1
mv      a1, a2
call    _Z5chainii
```
GCC generates:
```asm
call    _Z5chainii
mv      a1,a0
lw      a0,12(sp)
call    _Z5chainii
```
Also see on godbolt: https://godbolt.org/z/77rncrb3b

The disassembly of Coremark also contains this suboptimal pattern:
```asm
jal	0x8000f596 <crcu32>
lw	a1, 0x24(sp)
mv	a2, a0
mv	a0, a1
mv	a1, a2
jal	0x8000f596 <crcu32>
```

## The cause
The suboptimal code is caused by copy propagation being unable to find optimization opportunities because of a suboptimal order of instructions. The suboptimal order introduces unnecessary data dependencies between instructions. (Let's say there is a data dependency between MI A and MI B if there exists a register unit that is used or defined by both A and B.)
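In the patch this definition is checked with a register-overlap test over the operands of the two instructions; this is essentially the `twoMIsHaveMutualOperandRegisters` helper added in the diff below:
```cpp
// True if MI1 and MI2 use or define at least one pair of overlapping
// registers, i.e. there is a data dependency between them in the sense
// defined above.
static bool twoMIsHaveMutualOperandRegisters(const MachineInstr &MI1,
                                             const MachineInstr &MI2,
                                             const TargetRegisterInfo *TRI) {
  for (const MachineOperand &Op1 : MI1.operands())
    for (const MachineOperand &Op2 : MI2.operands())
      if (Op1.isReg() && Op2.isReg() &&
          TRI->regsOverlap(Op1.getReg(), Op2.getReg()))
        return true;
  return false;
}
```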

The reason for this data dependency in the examples above is that the scheduler places loads as early as possible.
This is usually a good thing; see https://discourse.llvm.org/t/how-to-copy-propagate-physical-register-introduced-before-ra/74828/4.

After creating the current patch and checking the regression tests, I found that the issue is more general than the scheduler prioritizing loads. If you look at the test cases, you can see that the current state of the patch also enables optimizations that have nothing to do with loads, so there are other cases where unnecessary data dependencies block optimizations. (See the changes in llvm/test/CodeGen/RISCV/double-fcmp-strict.ll, llvm/test/CodeGen/AArch64/sve-vector-interleave.ll and many more improvements that are not load related.)

## The current solution
The issue is more generic than suboptimal data dependencies occurring when loads are prioritized, so I think adjusting the scheduler is not enough. It also affects almost all targets (see the tests).

I think the proper way to solve this issue is to make machine copy propagation
"smarter": in the cases where one or more unnecessary data dependencies block an optimization,
they are recognized and resolved by moving the offending instruction(s) without changing the semantics.

I have created a new struct that records the data dependencies of instructions in a tree.
I also added logic that uses this tree to find unnecessary data dependencies.
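Condensed from the patch, the new bookkeeping in `CopyTracker` looks roughly like this (the complete version is in the diff below):
```cpp
// One node of the dependency tree, stored per instruction.
struct Dependency {
  Dependency() = default;
  Dependency(MachineInstr *MI) : MI(MI) {}

  MachineInstr *MI = nullptr;

  // When the MI was first seen; used to keep the relative order of the
  // tracked instructions when some of them are moved.
  int MIPosition = -1;

  // Instructions that share a register unit with this one and therefore
  // must stay before it.
  SmallVector<MachineInstr *> MustPrecede;
  bool AnythingMustPrecede = false;
};

// All dependency nodes of the basic block currently being processed.
DenseMap<MachineInstr *, Dependency> Dependencies;
```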

For example, consider the following MIR:
```llvm
0. r1 = LOAD sp
1. r2 = COPY r5
2. r3 = ADD r1 r4
3. r1 = COPY r2
```

Let's see what the dependencies are:
Instruction 3 has a dependency on instruction 2, since instruction 2 uses the value in r1 and r1 is re-defined by instruction 3. This means that instruction 2 must precede instruction 3.

Instruction 3 also has a dependency on instruction 0: both of these instructions define r1, so instruction 0 must precede instruction 3.

And finally, instruction 2 also has a dependency on instruction 0, since the value that is in r1 at the time instruction 2 executes is put there by instruction 0.

From the above we can deduce that instruction 0 must precede instruction 2 and instruction 2 must precede instruction 3.

We can also observe that instruction 3 has a dependency on instruction 1. Basically, in instructions 1 and 3 we are just moving the same value around. Could we erase this moving around and do r1 = COPY r5 directly? We can't just do that, since, as seen before, instruction 2 must precede any redefinition of r1. So in the current case this optimization is blocked by instruction 2.

What if we switched instruction 2 and instruction 1? They have no dependency on each other. Then we would get this:
```llvm
0. r1 = LOAD sp
2. r3 = ADD r1 r4
1. r2 = COPY r5 ; instruction 1 and 2 switched places
3. r1 = COPY r2
```

As we can see, the required order of instructions 0, 2 and 3 is still maintained. The semantics of the program stays the same.
We can also see that the previously blocked optimization becomes available, so the machine copy propagation pass can legally do the following modification:
```llvm
0. r1 = LOAD sp
2. r3 = ADD r1 r4
1. r1 = COPY r5
3. r1 = COPY r2
```

Later on it can be recognized that one instruction (instruction 3) is unnecessary and erase it, so we end up with more efficient code like this:
```llvm
0. r1 = LOAD sp
2. r3 = ADD r1 r4
1. r1 = COPY r5
```

## Why the current solution is the best
A suboptimal instruction order that causes copy propagation optimizations to be missed can appear for many different reasons and on almost all targets.
For this reason, I think the best way to deal with it is a general, target-independent analysis of the data dependencies between the instructions that are relevant to copy propagation.

## Limitations
Dependency trees can be complicated: an instruction can have multiple dependencies, which can in turn have multiple dependencies, and so on.
These dependency trees can also intersect at different points. The current patch only reasons about dependency trees that are one level deep. Extending it to handle arbitrary trees might be unnecessary, since I don't think there are many cases with such complex data dependencies in MIR. (I have only found one, in llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access-zve32x.ll, when checking the MIR during debugging.)

If we want to extend the PR to handle any tree, then the correct order of the instructions in the dependency tree can be calculated with
an in-order traversal of the dependency graph (plus some merging based on the MI positions in the basic block); a sketch follows.
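A hypothetical sketch of such a traversal, reusing the `Dependency` names from the patch (the `Visited` set and the final merge by `MIPosition` are only illustrative and not part of the current patch):
```cpp
// Hypothetical sketch: place every instruction of a dependency tree after
// everything in its MustPrecede list, via a depth-first walk. Intersecting
// trees are handled by the Visited set; unrelated nodes could then be
// merged back into the block by their MIPosition.
static void orderDependencyTree(MachineInstr *Root,
                                DenseMap<MachineInstr *, Dependency> &Deps,
                                SmallPtrSetImpl<MachineInstr *> &Visited,
                                SmallVectorImpl<MachineInstr *> &Ordered) {
  if (!Visited.insert(Root).second)
    return; // Already placed while walking an intersecting tree.
  for (MachineInstr *Pred : Deps[Root].MustPrecede)
    orderDependencyTree(Pred, Deps, Visited, Ordered);
  Ordered.push_back(Root); // All of Root's prerequisites are already emitted.
}
```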

## The current state of the PR
I have tried to check all the tests, but I could not decide whether the code is actually correct everywhere. I think it is correct in most places. I would be really glad if you could also check the tests.

There are some regression tests that are failing. These are hand-written tests whose results cannot be regenerated by the update llc or MIR test scripts. If you see a chance that this PR may get approved and merged, I will fix those tests.

## Conclusion
I would like to ask your opinion on this topic. Is this a good direction?
I would be really happy to finish this work, since I enjoy building things like this, if you think it is a good direction and could be merged once it is in a consistent state.

Do you have another solution in mind to enable optimization in those cases where unnecessary data dependencies are blocking it?

If you think this approach is good and I should continue, I will write tests for it and also fix the hand-written tests.

Thank you for checking the PR and giving me feedback.

From 795bcd3d35efd638622996536e5673c2092d5d33 Mon Sep 17 00:00:00 2001
From: Gabor Spaits <Gabor.Spaits at hightec-rt.com>
Date: Tue, 9 Jul 2024 00:13:56 +0200
Subject: [PATCH] [MachineCopyPropagation] Detect and fix suboptimal
 instruction order

---
 llvm/lib/CodeGen/MachineCopyPropagation.cpp   | 238 ++++++++++++-
 .../AArch64/GlobalISel/arm64-atomic.ll        |   3 +-
 .../GlobalISel/merge-stores-truncating.ll     |   3 +-
 llvm/test/CodeGen/AArch64/aarch64-wide-mul.ll |  10 +-
 llvm/test/CodeGen/AArch64/addp-shuffle.ll     |   3 +-
 llvm/test/CodeGen/AArch64/cgp-usubo.ll        |  15 +-
 llvm/test/CodeGen/AArch64/machine-cp.mir      | 215 ++++++++++++
 llvm/test/CodeGen/AArch64/neon-extmul.ll      |  10 +-
 llvm/test/CodeGen/AArch64/shufflevector.ll    |  17 +-
 .../streaming-compatible-memory-ops.ll        |   5 +-
 .../AArch64/sve-vector-deinterleave.ll        |  16 +-
 .../CodeGen/AArch64/sve-vector-interleave.ll  |   6 +-
 llvm/test/CodeGen/AArch64/vec_umulo.ll        |  11 +-
 llvm/test/CodeGen/AArch64/vselect-ext.ll      |  12 +-
 .../atomic_optimizations_pixelshader.ll       |  16 +-
 .../CodeGen/AMDGPU/vector_shuffle.packed.ll   | 155 ++++----
 llvm/test/CodeGen/LoongArch/tail-calls.ll     |   5 +-
 llvm/test/CodeGen/LoongArch/vector-fp-imm.ll  |   9 +-
 llvm/test/CodeGen/Mips/llvm-ir/or.ll          | 102 +++---
 llvm/test/CodeGen/Mips/llvm-ir/xor.ll         |  31 +-
 .../CodeGen/PowerPC/BreakableToken-reduced.ll |   5 +-
 .../test/CodeGen/PowerPC/atomics-i128-ldst.ll |   8 +-
 .../CodeGen/PowerPC/legalize-invert-br_cc.ll  |   3 +-
 .../lower-intrinsics-afn-mass_notail.ll       |   6 +-
 .../lower-intrinsics-fast-mass_notail.ll      |   6 +-
 .../CodeGen/PowerPC/machine-backward-cp.mir   | 121 ++++---
 .../PowerPC/stack-restore-with-setjmp.ll      |   3 +-
 llvm/test/CodeGen/RISCV/alu64.ll              |   3 +-
 llvm/test/CodeGen/RISCV/condops.ll            |  12 +-
 llvm/test/CodeGen/RISCV/double-fcmp-strict.ll |  36 +-
 llvm/test/CodeGen/RISCV/float-fcmp-strict.ll  |  18 +-
 llvm/test/CodeGen/RISCV/half-fcmp-strict.ll   |  18 +-
 llvm/test/CodeGen/RISCV/llvm.frexp.ll         |  18 +-
 llvm/test/CodeGen/RISCV/neg-abs.ll            |  10 +-
 .../RISCV/rv64-statepoint-call-lowering.ll    |   3 +-
 .../RISCV/rvv/constant-folding-crash.ll       |   6 +-
 llvm/test/CodeGen/RISCV/rvv/vmfeq.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmfge.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmfgt.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmfle.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmflt.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmfne.ll          |  18 +-
 llvm/test/CodeGen/RISCV/rvv/vmseq.ll          |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsge.ll          |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll         |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsgt.ll          |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll         |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsle.ll          |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsleu.ll         |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmslt.ll          |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsltu.ll         |  30 +-
 llvm/test/CodeGen/RISCV/rvv/vmsne.ll          |  30 +-
 .../CodeGen/RISCV/rvv/vsetvli-regression.ll   |   3 +-
 llvm/test/CodeGen/RISCV/shifts.ll             |   6 +-
 llvm/test/CodeGen/RISCV/srem-vector-lkk.ll    |  31 +-
 llvm/test/CodeGen/RISCV/tail-calls.ll         |  14 +-
 llvm/test/CodeGen/RISCV/urem-vector-lkk.ll    |  37 +-
 ...lar-shift-by-byte-multiple-legalization.ll |   6 +-
 .../RISCV/wide-scalar-shift-legalization.ll   |   6 +-
 llvm/test/CodeGen/SPARC/tailcall.ll           |  10 +-
 .../vector-constrained-fp-intrinsics.ll       | 330 ++++++------------
 llvm/test/CodeGen/Thumb2/mve-pred-ext.ll      |   6 +-
 llvm/test/CodeGen/Thumb2/mve-vmull-splat.ll   |   4 -
 llvm/test/CodeGen/X86/avx512-intrinsics.ll    |   6 +-
 llvm/test/CodeGen/X86/matrix-multiply.ll      |   5 +-
 llvm/test/CodeGen/X86/vec_saddo.ll            |  14 +-
 llvm/test/CodeGen/X86/vec_ssubo.ll            |   5 +-
 .../X86/vector-shuffle-combining-avx.ll       |   3 +-
 llvm/test/CodeGen/X86/xmulo.ll                | 126 +++----
 69 files changed, 1099 insertions(+), 1079 deletions(-)
 create mode 100644 llvm/test/CodeGen/AArch64/machine-cp.mir

diff --git a/llvm/lib/CodeGen/MachineCopyPropagation.cpp b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
index bdc17e99d1fb07..de62f33c67cc4d 100644
--- a/llvm/lib/CodeGen/MachineCopyPropagation.cpp
+++ b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
@@ -49,29 +49,36 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/DenseSet.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/SetVector.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/iterator_range.h"
+#include "llvm/Analysis/BlockFrequencyInfo.h"
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/Register.h"
 #include "llvm/CodeGen/TargetInstrInfo.h"
 #include "llvm/CodeGen/TargetRegisterInfo.h"
 #include "llvm/CodeGen/TargetSubtargetInfo.h"
 #include "llvm/InitializePasses.h"
+#include "llvm/MC/MCRegister.h"
 #include "llvm/MC/MCRegisterInfo.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/DebugCounter.h"
+#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
+#include <algorithm>
 #include <cassert>
 #include <iterator>
+#include <utility>
 
 using namespace llvm;
 
@@ -105,7 +112,51 @@ static std::optional<DestSourcePair> isCopyInstr(const MachineInstr &MI,
   return std::nullopt;
 }
 
+static bool hasOverlaps(const MachineInstr &MI, Register &Reg,
+                        const TargetRegisterInfo *TRI) {
+  for (const MachineOperand &MI : MI.operands())
+    if (MI.isReg() && TRI->regsOverlap(Reg, MI.getReg()))
+      return true;
+  return false;
+}
+
+static bool twoMIsHaveMutualOperandRegisters(const MachineInstr &MI1,
+                                             const MachineInstr &MI2,
+                                             const TargetRegisterInfo *TRI) {
+  for (auto MI1OP : MI1.operands()) {
+    for (auto MI2OP : MI2.operands()) {
+      if (MI1OP.isReg() && MI2OP.isReg() &&
+          TRI->regsOverlap(MI1OP.getReg(), MI2OP.getReg())) {
+        return true;
+      }
+    }
+  }
+  return false;
+}
+
 class CopyTracker {
+  // When conducting backward copy propagation we may need to move instructions
+  // that are in the dependency tree; in this case the relative order of the
+  // moved instructions to each other must not change. This variable is used to
+  // track where we are in the basic block.
+  int Time = 0;
+
+  // A tree representing dependencies between instructions.
+  struct Dependency {
+    Dependency() = default;
+    Dependency(MachineInstr *MI) : MI(MI) {}
+    MachineInstr *MI;
+
+    // When the MI appears. It is used to keep the MIs relative order to
+    // each other.
+    int MIPosition = -1;
+
+    // The children in the dependency tree. These are the instructions
+    // that work with at least one common register with the current instruction.
+    llvm::SmallVector<MachineInstr *> MustPrecede;
+    bool AnythingMustPrecede = false;
+  };
+
   struct CopyInfo {
     MachineInstr *MI, *LastSeenUseInCopy;
     SmallVector<MCRegister, 4> DefRegs;
@@ -113,9 +164,10 @@ class CopyTracker {
   };
 
   DenseMap<MCRegister, CopyInfo> Copies;
+  DenseMap<MachineInstr *, Dependency> Dependencies;
 
 public:
-  /// Mark all of the given registers and their subregisters as unavailable for
+  /// Mark all of the given registers and their sub registers as unavailable for
   /// copying.
   void markRegsUnavailable(ArrayRef<MCRegister> Regs,
                            const TargetRegisterInfo &TRI) {
@@ -129,6 +181,27 @@ class CopyTracker {
     }
   }
 
+  void moveBAfterA(MachineInstr *A, MachineInstr *B) {
+    llvm::MachineBasicBlock *MBB = A->getParent();
+    assert(MBB == B->getParent() &&
+           "Both instructions must be in the same MachineBasicBlock");
+    assert(Dependencies.contains(B) &&
+           "Shall contain the instruction that blocks the optimization.");
+    Dependencies[B].MIPosition = Time++;
+    MBB->remove(B);
+    MBB->insertAfter(--A->getIterator(), B);
+  }
+
+  // Add a new Dependency to the already existing ones.
+  void addDependency(Dependency B) {
+    if (Dependencies.contains(B.MI))
+      return;
+
+    B.MIPosition = Time++;
+    Dependencies.insert({B.MI, B});
+  }
+
+  /// Only called for backward propagation
   /// Remove register from copy maps.
   void invalidateRegister(MCRegister Reg, const TargetRegisterInfo &TRI,
                           const TargetInstrInfo &TII, bool UseCopyInstr) {
@@ -222,6 +295,95 @@ class CopyTracker {
     }
   }
 
+  void setDependenciesForMI(MachineInstr *MI, const TargetRegisterInfo &TRI,
+                            const TargetInstrInfo &TII, bool UseCopyInstr) {
+    bool Blocks = !Dependencies.contains(MI);
+    Dependency b{MI};
+    Dependency *CurrentBlocker = nullptr;
+    if (!Blocks) {
+      CurrentBlocker = &Dependencies[MI];
+    } else {
+      CurrentBlocker = &b;
+    }
+
+    for (const MachineOperand &Operand : MI->operands()) {
+      if (!Operand.isReg())
+        continue;
+
+      Register OpReg = Operand.getReg();
+      MCRegister OpMCReg = OpReg.asMCReg();
+      if (!OpMCReg)
+        continue;
+
+      // Invalidate those copies that are affected by the definition or usage of
+      // the current MI.
+      for (MCRegUnit UsedOPMcRegUnit : TRI.regunits(OpMCReg)) {
+        auto CopyThatDependsOnIt = Copies.find(UsedOPMcRegUnit);
+        // Do not take debug usages into account.
+        if (CopyThatDependsOnIt != Copies.end() &&
+            !(Operand.isUse() && Operand.isDebug())) {
+          Copies.erase(CopyThatDependsOnIt);
+        }
+      }
+
+      for (std::pair<MachineInstr *, Dependency> &Dep : Dependencies) {
+        assert(
+            Dep.first == Dep.second.MI &&
+            "Inconsistent state: The key and MI of a blocker do not match\n");
+        MachineInstr *DepMI = Dep.first;
+        if (DepMI == MI)
+          continue;
+        if (Operand.isUse() && Operand.isDebug())
+          continue;
+        if (!hasOverlaps(*DepMI, OpReg, &TRI))
+          continue;
+
+        // The current instruction precedes the instruction in the dependency
+        // tree.
+        if (CurrentBlocker->MIPosition == -1 ||
+            Dep.second.MIPosition < CurrentBlocker->MIPosition) {
+          Dep.second.MustPrecede.push_back(MI);
+          Dep.second.AnythingMustPrecede = true;
+          continue;
+        }
+
+        // The current instruction is preceded by the instruction in the
+        // dependency tree. This can happen when other instructions are moved
+        // before the currently analyzed one to optimize data dependencies.
+        if (CurrentBlocker->MIPosition != -1 &&
+            Dep.second.MIPosition > CurrentBlocker->MIPosition) {
+          CurrentBlocker->MustPrecede.push_back(DepMI);
+          CurrentBlocker->AnythingMustPrecede = true;
+          continue;
+        }
+      }
+    }
+
+    if (Blocks) {
+      addDependency(*CurrentBlocker);
+    }
+  }
+
+  std::optional<llvm::SmallVector<MachineInstr *>>
+  getFirstPreviousDependencies(MachineInstr *MI,
+                               const TargetRegisterInfo &TRI) {
+    if (Dependencies.contains(MI)) {
+      auto PrevDefs = Dependencies[MI].MustPrecede;
+      if (std::all_of(PrevDefs.begin(), PrevDefs.end(), [&](auto OneUse) {
+            if (!OneUse) {
+              return false;
+            }
+            return !(Dependencies[OneUse].AnythingMustPrecede) &&
+                   (Dependencies[OneUse].MustPrecede.size() == 0);
+          })) {
+
+        return Dependencies[MI].MustPrecede;
+      }
+      return {};
+    }
+    return {{}};
+  }
+
   /// Add this copy's registers into the tracker's copy maps.
   void trackCopy(MachineInstr *MI, const TargetRegisterInfo &TRI,
                  const TargetInstrInfo &TII, bool UseCopyInstr) {
@@ -245,6 +407,9 @@ class CopyTracker {
         Copy.DefRegs.push_back(Def);
       Copy.LastSeenUseInCopy = MI;
     }
+    
+    Dependency b{MI};
+    addDependency(b);
   }
 
   bool hasAnyCopies() {
@@ -263,12 +428,16 @@ class CopyTracker {
   }
 
   MachineInstr *findCopyDefViaUnit(MCRegister RegUnit,
-                                   const TargetRegisterInfo &TRI) {
+                                   const TargetRegisterInfo &TRI,
+                                   bool CanUseLastSeenInCopy = false) {
     auto CI = Copies.find(RegUnit);
     if (CI == Copies.end())
       return nullptr;
     if (CI->second.DefRegs.size() != 1)
       return nullptr;
+    if (CanUseLastSeenInCopy)
+      return CI->second.LastSeenUseInCopy;
+
     MCRegUnit RU = *TRI.regunits(CI->second.DefRegs[0]).begin();
     return findCopyForUnit(RU, TRI, true);
   }
@@ -278,7 +447,7 @@ class CopyTracker {
                                       const TargetInstrInfo &TII,
                                       bool UseCopyInstr) {
     MCRegUnit RU = *TRI.regunits(Reg).begin();
-    MachineInstr *AvailCopy = findCopyDefViaUnit(RU, TRI);
+    MachineInstr *AvailCopy = findCopyDefViaUnit(RU, TRI, true);
 
     if (!AvailCopy)
       return nullptr;
@@ -376,6 +545,7 @@ class CopyTracker {
 
   void clear() {
     Copies.clear();
+    Dependencies.clear();
   }
 };
 
@@ -1029,6 +1199,50 @@ void MachineCopyPropagation::propagateDefs(MachineInstr &MI) {
     if (hasOverlappingMultipleDef(MI, MODef, Def))
       continue;
 
+    LLVM_DEBUG(dbgs() << "Backward copy was found\n");
+    LLVM_DEBUG(Copy->dump());
+
+    // Let's see if we have any kind of previous dependencies for the copy,
+    // that have no other dependencies. (So the dependency tree is one level
+    // deep)
+    auto PreviousDependencies =
+        Tracker.getFirstPreviousDependencies(Copy, *TRI);
+
+    if (!PreviousDependencies)
+      // The dependency tree is more than one level deep.
+      continue;
+
+    LLVM_DEBUG(dbgs() << "Number of dependencies of the copy: "
+                      << PreviousDependencies->size() << "\n");
+    
+    // Check whether the current MI shares any registers with the
+    // optimization-blocking instructions.
+    bool NoDependencyWithMI =
+        std::all_of(PreviousDependencies->begin(), PreviousDependencies->end(),
+                    [&](auto *MI1) {
+                      return !twoMIsHaveMutualOperandRegisters(*MI1, MI, TRI);
+                    });
+    if (!NoDependencyWithMI)
+      continue;
+    
+    // If there isn't any relationship between the blockers and the current MI
+    // then we can start moving them.
+
+    // Add the new instruction to the list of dependencies.
+    Tracker.addDependency({&MI});
+
+    // Then move the previous instruction before the current.
+    // Later the dependency analysis will run for the current MI. There
+    // the relationship between these moved instructions and the current
+    // MI is handled.
+    for (llvm::MachineInstr *I : llvm::reverse(*PreviousDependencies)) {
+      LLVM_DEBUG(dbgs() << "Moving ");
+      LLVM_DEBUG(I->dump());
+      LLVM_DEBUG(dbgs() << "Before ");
+      LLVM_DEBUG(MI.dump());
+      Tracker.moveBAfterA(&MI, I);
+    }
+
     LLVM_DEBUG(dbgs() << "MCP: Replacing " << printReg(MODef.getReg(), TRI)
                       << "\n     with " << printReg(Def, TRI) << "\n     in "
                       << MI << "     from " << *Copy);
@@ -1036,6 +1250,10 @@ void MachineCopyPropagation::propagateDefs(MachineInstr &MI) {
     MODef.setReg(Def);
     MODef.setIsRenamable(CopyOperands->Destination->isRenamable());
 
+    Tracker.invalidateRegister(MODef.getReg().asMCReg(), *TRI, *TII,
+                               UseCopyInstr);
+    Tracker.invalidateRegister(Def, *TRI, *TII, UseCopyInstr);
+
     LLVM_DEBUG(dbgs() << "MCP: After replacement: " << MI << "\n");
     MaybeDeadCopies.insert(Copy);
     Changed = true;
@@ -1060,10 +1278,7 @@ void MachineCopyPropagation::BackwardCopyPropagateBlock(
         // Unlike forward cp, we don't invoke propagateDefs here,
         // just let forward cp do COPY-to-COPY propagation.
         if (isBackwardPropagatableCopy(*CopyOperands, *MRI)) {
-          Tracker.invalidateRegister(SrcReg.asMCReg(), *TRI, *TII,
-                                     UseCopyInstr);
-          Tracker.invalidateRegister(DefReg.asMCReg(), *TRI, *TII,
-                                     UseCopyInstr);
+          Tracker.setDependenciesForMI(&MI, *TRI, *TII, UseCopyInstr); // Maybe it is bad to call it here
           Tracker.trackCopy(&MI, *TRI, *TII, UseCopyInstr);
           continue;
         }
@@ -1080,6 +1295,8 @@ void MachineCopyPropagation::BackwardCopyPropagateBlock(
       }
 
     propagateDefs(MI);
+    Tracker.setDependenciesForMI(&MI, *TRI, *TII, UseCopyInstr);
+
     for (const MachineOperand &MO : MI.operands()) {
       if (!MO.isReg())
         continue;
@@ -1087,10 +1304,6 @@ void MachineCopyPropagation::BackwardCopyPropagateBlock(
       if (!MO.getReg())
         continue;
 
-      if (MO.isDef())
-        Tracker.invalidateRegister(MO.getReg().asMCReg(), *TRI, *TII,
-                                   UseCopyInstr);
-
       if (MO.readsReg()) {
         if (MO.isDebug()) {
           //  Check if the register in the debug instruction is utilized
@@ -1101,9 +1314,6 @@ void MachineCopyPropagation::BackwardCopyPropagateBlock(
               CopyDbgUsers[Copy].insert(&MI);
             }
           }
-        } else {
-          Tracker.invalidateRegister(MO.getReg().asMCReg(), *TRI, *TII,
-                                     UseCopyInstr);
         }
       }
     }
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll
index b619aac709d985..21cc2e0e570776 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll
@@ -106,11 +106,10 @@ define i32 @val_compare_and_swap_from_load(ptr %p, i32 %cmp, ptr %pnew) #0 {
 ; CHECK-OUTLINE-O1-LABEL: val_compare_and_swap_from_load:
 ; CHECK-OUTLINE-O1:       ; %bb.0:
 ; CHECK-OUTLINE-O1-NEXT:    stp x29, x30, [sp, #-16]! ; 16-byte Folded Spill
-; CHECK-OUTLINE-O1-NEXT:    ldr w8, [x2]
 ; CHECK-OUTLINE-O1-NEXT:    mov x3, x0
 ; CHECK-OUTLINE-O1-NEXT:    mov w0, w1
+; CHECK-OUTLINE-O1-NEXT:    ldr w1, [x2]
 ; CHECK-OUTLINE-O1-NEXT:    mov x2, x3
-; CHECK-OUTLINE-O1-NEXT:    mov w1, w8
 ; CHECK-OUTLINE-O1-NEXT:    bl ___aarch64_cas4_acq
 ; CHECK-OUTLINE-O1-NEXT:    ldp x29, x30, [sp], #16 ; 16-byte Folded Reload
 ; CHECK-OUTLINE-O1-NEXT:    ret
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/merge-stores-truncating.ll b/llvm/test/CodeGen/AArch64/GlobalISel/merge-stores-truncating.ll
index 7fd71b26fa1ba7..c8f8361e5ef885 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/merge-stores-truncating.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/merge-stores-truncating.ll
@@ -256,9 +256,8 @@ define dso_local i32 @load_between_stores(i32 %x, ptr %p, ptr %ptr) {
 ; CHECK:       ; %bb.0:
 ; CHECK-NEXT:    strh w0, [x1]
 ; CHECK-NEXT:    lsr w9, w0, #16
-; CHECK-NEXT:    ldr w8, [x2]
+; CHECK-NEXT:    ldr w0, [x2]
 ; CHECK-NEXT:    strh w9, [x1, #2]
-; CHECK-NEXT:    mov w0, w8
 ; CHECK-NEXT:    ret
   %t1 = trunc i32 %x to i16
   %sh = lshr i32 %x, 16
diff --git a/llvm/test/CodeGen/AArch64/aarch64-wide-mul.ll b/llvm/test/CodeGen/AArch64/aarch64-wide-mul.ll
index 410c2d9021d6d5..a150a0f6ee40a2 100644
--- a/llvm/test/CodeGen/AArch64/aarch64-wide-mul.ll
+++ b/llvm/test/CodeGen/AArch64/aarch64-wide-mul.ll
@@ -131,13 +131,12 @@ entry:
 define <16 x i32> @mla_i32(<16 x i8> %a, <16 x i8> %b, <16 x i32> %c) {
 ; CHECK-SD-LABEL: mla_i32:
 ; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    umull2 v7.8h, v0.16b, v1.16b
 ; CHECK-SD-NEXT:    umull v6.8h, v0.8b, v1.8b
-; CHECK-SD-NEXT:    uaddw2 v5.4s, v5.4s, v7.8h
+; CHECK-SD-NEXT:    umull2 v7.8h, v0.16b, v1.16b
 ; CHECK-SD-NEXT:    uaddw v0.4s, v2.4s, v6.4h
 ; CHECK-SD-NEXT:    uaddw2 v1.4s, v3.4s, v6.8h
+; CHECK-SD-NEXT:    uaddw2 v3.4s, v5.4s, v7.8h
 ; CHECK-SD-NEXT:    uaddw v2.4s, v4.4s, v7.4h
-; CHECK-SD-NEXT:    mov v3.16b, v5.16b
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: mla_i32:
@@ -170,18 +169,17 @@ define <16 x i64> @mla_i64(<16 x i8> %a, <16 x i8> %b, <16 x i64> %c) {
 ; CHECK-SD-NEXT:    umull2 v0.8h, v0.16b, v1.16b
 ; CHECK-SD-NEXT:    ldp q20, q21, [sp]
 ; CHECK-SD-NEXT:    ushll v17.4s, v16.4h, #0
+; CHECK-SD-NEXT:    ushll v18.4s, v0.4h, #0
 ; CHECK-SD-NEXT:    ushll2 v16.4s, v16.8h, #0
 ; CHECK-SD-NEXT:    ushll2 v19.4s, v0.8h, #0
-; CHECK-SD-NEXT:    ushll v18.4s, v0.4h, #0
 ; CHECK-SD-NEXT:    uaddw2 v1.2d, v3.2d, v17.4s
 ; CHECK-SD-NEXT:    uaddw v0.2d, v2.2d, v17.2s
 ; CHECK-SD-NEXT:    uaddw2 v3.2d, v5.2d, v16.4s
 ; CHECK-SD-NEXT:    uaddw v2.2d, v4.2d, v16.2s
-; CHECK-SD-NEXT:    uaddw2 v16.2d, v21.2d, v19.4s
 ; CHECK-SD-NEXT:    uaddw v4.2d, v6.2d, v18.2s
 ; CHECK-SD-NEXT:    uaddw2 v5.2d, v7.2d, v18.4s
+; CHECK-SD-NEXT:    uaddw2 v7.2d, v21.2d, v19.4s
 ; CHECK-SD-NEXT:    uaddw v6.2d, v20.2d, v19.2s
-; CHECK-SD-NEXT:    mov v7.16b, v16.16b
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: mla_i64:
diff --git a/llvm/test/CodeGen/AArch64/addp-shuffle.ll b/llvm/test/CodeGen/AArch64/addp-shuffle.ll
index fb96d11acc275a..3dd6068ea3c227 100644
--- a/llvm/test/CodeGen/AArch64/addp-shuffle.ll
+++ b/llvm/test/CodeGen/AArch64/addp-shuffle.ll
@@ -63,9 +63,8 @@ define <16 x i8> @deinterleave_shuffle_v32i8(<32 x i8> %a) {
 define <4 x i64> @deinterleave_shuffle_v8i64(<8 x i64> %a) {
 ; CHECK-LABEL: deinterleave_shuffle_v8i64:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    addp v2.2d, v2.2d, v3.2d
 ; CHECK-NEXT:    addp v0.2d, v0.2d, v1.2d
-; CHECK-NEXT:    mov v1.16b, v2.16b
+; CHECK-NEXT:    addp v1.2d, v2.2d, v3.2d
 ; CHECK-NEXT:    ret
   %r0 = shufflevector <8 x i64> %a, <8 x i64> poison, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
   %r1 = shufflevector <8 x i64> %a, <8 x i64> poison, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
diff --git a/llvm/test/CodeGen/AArch64/cgp-usubo.ll b/llvm/test/CodeGen/AArch64/cgp-usubo.ll
index d307107fc07ee6..c8b73625086644 100644
--- a/llvm/test/CodeGen/AArch64/cgp-usubo.ll
+++ b/llvm/test/CodeGen/AArch64/cgp-usubo.ll
@@ -40,9 +40,8 @@ define i1 @usubo_ugt_constant_op0_i8(i8 %x, ptr %p) nounwind {
 ; CHECK-NEXT:    mov w9, #42 // =0x2a
 ; CHECK-NEXT:    cmp w8, #42
 ; CHECK-NEXT:    sub w9, w9, w0
-; CHECK-NEXT:    cset w8, hi
+; CHECK-NEXT:    cset w0, hi
 ; CHECK-NEXT:    strb w9, [x1]
-; CHECK-NEXT:    mov w0, w8
 ; CHECK-NEXT:    ret
   %s = sub i8 42, %x
   %ov = icmp ugt i8 %x, 42
@@ -59,9 +58,8 @@ define i1 @usubo_ult_constant_op0_i16(i16 %x, ptr %p) nounwind {
 ; CHECK-NEXT:    mov w9, #43 // =0x2b
 ; CHECK-NEXT:    cmp w8, #43
 ; CHECK-NEXT:    sub w9, w9, w0
-; CHECK-NEXT:    cset w8, hi
+; CHECK-NEXT:    cset w0, hi
 ; CHECK-NEXT:    strh w9, [x1]
-; CHECK-NEXT:    mov w0, w8
 ; CHECK-NEXT:    ret
   %s = sub i16 43, %x
   %ov = icmp ult i16 43, %x
@@ -78,8 +76,7 @@ define i1 @usubo_ult_constant_op1_i16(i16 %x, ptr %p) nounwind {
 ; CHECK-NEXT:    sub w9, w0, #44
 ; CHECK-NEXT:    cmp w8, #44
 ; CHECK-NEXT:    strh w9, [x1]
-; CHECK-NEXT:    cset w8, lo
-; CHECK-NEXT:    mov w0, w8
+; CHECK-NEXT:    cset w0, lo
 ; CHECK-NEXT:    ret
   %s = add i16 %x, -44
   %ov = icmp ult i16 %x, 44
@@ -94,8 +91,7 @@ define i1 @usubo_ugt_constant_op1_i8(i8 %x, ptr %p) nounwind {
 ; CHECK-NEXT:    sub w9, w0, #45
 ; CHECK-NEXT:    cmp w8, #45
 ; CHECK-NEXT:    strb w9, [x1]
-; CHECK-NEXT:    cset w8, lo
-; CHECK-NEXT:    mov w0, w8
+; CHECK-NEXT:    cset w0, lo
 ; CHECK-NEXT:    ret
   %ov = icmp ugt i8 45, %x
   %s = add i8 %x, -45
@@ -110,9 +106,8 @@ define i1 @usubo_eq_constant1_op1_i32(i32 %x, ptr %p) nounwind {
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    cmp w0, #0
 ; CHECK-NEXT:    sub w9, w0, #1
-; CHECK-NEXT:    cset w8, eq
+; CHECK-NEXT:    cset w0, eq
 ; CHECK-NEXT:    str w9, [x1]
-; CHECK-NEXT:    mov w0, w8
 ; CHECK-NEXT:    ret
   %s = add i32 %x, -1
   %ov = icmp eq i32 %x, 0
diff --git a/llvm/test/CodeGen/AArch64/machine-cp.mir b/llvm/test/CodeGen/AArch64/machine-cp.mir
new file mode 100644
index 00000000000000..de57627d82c57c
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/machine-cp.mir
@@ -0,0 +1,215 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -o - %s -O3 --run-pass=machine-cp -mtriple=aarch64 --verify-machineinstrs | FileCheck %s
+
+
+
+--- |
+  target datalayout = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128-Fn32"
+  target triple = "aarch64"
+
+  declare dso_local i32 @chain(i32 noundef, i32 noundef) local_unnamed_addr
+
+  declare dso_local void @init_var(ptr noundef) local_unnamed_addr
+
+  define dso_local void @fun2(i64 %a, i64 %b) local_unnamed_addr {
+  entry:
+    %c = alloca i32, align 4
+    %0 = trunc i64 %a to i32
+    %1 = trunc i64 %b to i32
+    %call = call i32 @chain(i32 noundef %0, i32 noundef %1)
+    %2 = load i32, ptr %c, align 4
+    %call1 = call i32 @chain(i32 noundef %2, i32 noundef %call)
+    ret void
+  }
+
+  define dso_local void @cantReasonAboutMultiLevelDepTrees(i64 %a, i64 %b) local_unnamed_addr {
+  entry:
+    %c = alloca i32, align 4
+    %0 = trunc i64 %a to i32
+    %1 = trunc i64 %b to i32
+    %call = call i32 @chain(i32 noundef %0, i32 noundef %1)
+    %2 = load i32, ptr %c, align 4
+    %call1 = call i32 @chain(i32 noundef %2, i32 noundef %call)
+    ret void
+  }
+
+...
+---
+name:            fun2
+alignment:       4
+exposesReturnsTwice: false
+legalized:       false
+regBankSelected: false
+selected:        false
+failedISel:      false
+tracksRegLiveness: true
+hasWinCFI:       false
+callsEHReturn:   false
+callsUnwindInit: false
+hasEHCatchret:   false
+hasEHScopes:     false
+hasEHFunclets:   false
+isOutlined:      false
+debugInstrRef:   false
+failsVerification: false
+tracksDebugUserValues: true
+registers:       []
+liveins:
+  - { reg: '$x0', virtual-reg: '' }
+  - { reg: '$x1', virtual-reg: '' }
+frameInfo:
+  isFrameAddressTaken: false
+  isReturnAddressTaken: false
+  hasStackMap:     false
+  hasPatchPoint:   false
+  stackSize:       0
+  offsetAdjustment: 0
+  maxAlignment:    4
+  adjustsStack:    true
+  hasCalls:        true
+  stackProtector:  ''
+  functionContext: ''
+  maxCallFrameSize: 0
+  cvBytesOfCalleeSavedRegisters: 0
+  hasOpaqueSPAdjustment: false
+  hasVAStart:      false
+  hasMustTailInVarArgFunc: false
+  hasTailCall:     false
+  isCalleeSavedInfoValid: false
+  localFrameSize:  4
+  savePoint:       ''
+  restorePoint:    ''
+fixedStack:      []
+stack:
+  - { id: 0, name: c, type: default, offset: 0, size: 4, alignment: 4,
+      stack-id: default, callee-saved-register: '', callee-saved-restored: true,
+      local-offset: -4, debug-info-variable: '', debug-info-expression: '',
+      debug-info-location: '' }
+entry_values:    []
+callSites:       []
+debugValueSubstitutions: []
+constants:       []
+machineFunctionInfo: {}
+body:             |
+  bb.0.entry:
+    liveins: $x0, $x1
+
+    ; CHECK-LABEL: name: fun2
+    ; CHECK: liveins: $x0, $x1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: $w0 = KILL renamable $w0, implicit killed $x0
+    ; CHECK-NEXT: $w1 = KILL renamable $w1, implicit killed $x1
+    ; CHECK-NEXT: BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def $w0
+    ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: renamable $w1 = COPY $w0
+    ; CHECK-NEXT: $w0 = LDRWui %stack.0.c, 0 :: (dereferenceable load (s32) from %ir.c)
+    ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def dead $w0
+    ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: RET_ReallyLR
+    ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    $w0 = KILL renamable $w0, implicit killed $x0
+    $w1 = KILL renamable $w1, implicit killed $x1
+    BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def $w0
+    ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    renamable $w8 = LDRWui %stack.0.c, 0 :: (dereferenceable load (s32) from %ir.c)
+    renamable $w1 = COPY $w0
+    ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    $w0 = COPY killed renamable $w8
+    BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def dead $w0
+    ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    RET_ReallyLR
+
+...
+---
+name:            cantReasonAboutMultiLevelDepTrees
+alignment:       4
+exposesReturnsTwice: false
+legalized:       false
+regBankSelected: false
+selected:        false
+failedISel:      false
+tracksRegLiveness: true
+hasWinCFI:       false
+callsEHReturn:   false
+callsUnwindInit: false
+hasEHCatchret:   false
+hasEHScopes:     false
+hasEHFunclets:   false
+isOutlined:      false
+debugInstrRef:   false
+failsVerification: false
+tracksDebugUserValues: true
+registers:       []
+liveins:
+  - { reg: '$x0', virtual-reg: '' }
+  - { reg: '$x1', virtual-reg: '' }
+frameInfo:
+  isFrameAddressTaken: false
+  isReturnAddressTaken: false
+  hasStackMap:     false
+  hasPatchPoint:   false
+  stackSize:       0
+  offsetAdjustment: 0
+  maxAlignment:    4
+  adjustsStack:    true
+  hasCalls:        true
+  stackProtector:  ''
+  functionContext: ''
+  maxCallFrameSize: 0
+  cvBytesOfCalleeSavedRegisters: 0
+  hasOpaqueSPAdjustment: false
+  hasVAStart:      false
+  hasMustTailInVarArgFunc: false
+  hasTailCall:     false
+  isCalleeSavedInfoValid: false
+  localFrameSize:  4
+  savePoint:       ''
+  restorePoint:    ''
+fixedStack:      []
+stack:
+  - { id: 0, name: c, type: default, offset: 0, size: 4, alignment: 4,
+      stack-id: default, callee-saved-register: '', callee-saved-restored: true,
+      local-offset: -4, debug-info-variable: '', debug-info-expression: '',
+      debug-info-location: '' }
+entry_values:    []
+callSites:       []
+debugValueSubstitutions: []
+constants:       []
+machineFunctionInfo: {}
+body:             |
+  bb.0.entry:
+    liveins: $x0, $x1
+
+    ; CHECK-LABEL: name: cantReasonAboutMultiLevelDepTrees
+    ; CHECK: liveins: $x0, $x1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: $w0 = KILL renamable $w0, implicit killed $x0
+    ; CHECK-NEXT: $w1 = KILL renamable $w1, implicit killed $x1
+    ; CHECK-NEXT: BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def $w0
+    ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: renamable $w8 = LDRWui %stack.0.c, 0 :: (dereferenceable load (s32) from %ir.c)
+    ; CHECK-NEXT: renamable $w1 = COPY $w0
+    ; CHECK-NEXT: $w1 = ADDWrr $w1, $w0
+    ; CHECK-NEXT: ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: $w0 = COPY killed renamable $w8
+    ; CHECK-NEXT: BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def dead $w0
+    ; CHECK-NEXT: ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    ; CHECK-NEXT: RET_ReallyLR
+    ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    $w0 = KILL renamable $w0, implicit killed $x0
+    $w1 = KILL renamable $w1, implicit killed $x1
+    BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def $w0
+    ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    renamable $w8 = LDRWui %stack.0.c, 0 :: (dereferenceable load (s32) from %ir.c)
+    renamable $w1 = COPY $w0
+    $w1 = ADDWrr $w1, $w0
+    ADJCALLSTACKDOWN 0, 0, implicit-def dead $sp, implicit $sp
+    $w0 = COPY killed renamable $w8
+    BL @chain, csr_aarch64_aapcs, implicit-def dead $lr, implicit $sp, implicit $w0, implicit $w1, implicit-def $sp, implicit-def dead $w0
+    ADJCALLSTACKUP 0, 0, implicit-def dead $sp, implicit $sp
+    RET_ReallyLR
+
+...
diff --git a/llvm/test/CodeGen/AArch64/neon-extmul.ll b/llvm/test/CodeGen/AArch64/neon-extmul.ll
index 3dbc033dfab964..cdd04eb3fa9c37 100644
--- a/llvm/test/CodeGen/AArch64/neon-extmul.ll
+++ b/llvm/test/CodeGen/AArch64/neon-extmul.ll
@@ -296,13 +296,12 @@ define <8 x i64> @extmuladds_v8i8_i64(<8 x i8> %s0, <8 x i8> %s1, <8 x i64> %b)
 ; CHECK-SD-LABEL: extmuladds_v8i8_i64:
 ; CHECK-SD:       // %bb.0: // %entry
 ; CHECK-SD-NEXT:    smull v0.8h, v0.8b, v1.8b
-; CHECK-SD-NEXT:    sshll2 v6.4s, v0.8h, #0
 ; CHECK-SD-NEXT:    sshll v1.4s, v0.4h, #0
-; CHECK-SD-NEXT:    saddw2 v5.2d, v5.2d, v6.4s
+; CHECK-SD-NEXT:    sshll2 v6.4s, v0.8h, #0
 ; CHECK-SD-NEXT:    saddw v0.2d, v2.2d, v1.2s
 ; CHECK-SD-NEXT:    saddw2 v1.2d, v3.2d, v1.4s
+; CHECK-SD-NEXT:    saddw2 v3.2d, v5.2d, v6.4s
 ; CHECK-SD-NEXT:    saddw v2.2d, v4.2d, v6.2s
-; CHECK-SD-NEXT:    mov v3.16b, v5.16b
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: extmuladds_v8i8_i64:
@@ -334,13 +333,12 @@ define <8 x i64> @extmuladdu_v8i8_i64(<8 x i8> %s0, <8 x i8> %s1, <8 x i64> %b)
 ; CHECK-SD-LABEL: extmuladdu_v8i8_i64:
 ; CHECK-SD:       // %bb.0: // %entry
 ; CHECK-SD-NEXT:    umull v0.8h, v0.8b, v1.8b
-; CHECK-SD-NEXT:    ushll2 v6.4s, v0.8h, #0
 ; CHECK-SD-NEXT:    ushll v1.4s, v0.4h, #0
-; CHECK-SD-NEXT:    uaddw2 v5.2d, v5.2d, v6.4s
+; CHECK-SD-NEXT:    ushll2 v6.4s, v0.8h, #0
 ; CHECK-SD-NEXT:    uaddw v0.2d, v2.2d, v1.2s
 ; CHECK-SD-NEXT:    uaddw2 v1.2d, v3.2d, v1.4s
+; CHECK-SD-NEXT:    uaddw2 v3.2d, v5.2d, v6.4s
 ; CHECK-SD-NEXT:    uaddw v2.2d, v4.2d, v6.2s
-; CHECK-SD-NEXT:    mov v3.16b, v5.16b
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: extmuladdu_v8i8_i64:
diff --git a/llvm/test/CodeGen/AArch64/shufflevector.ll b/llvm/test/CodeGen/AArch64/shufflevector.ll
index b1131f287fe9a9..56ff1c88e34f65 100644
--- a/llvm/test/CodeGen/AArch64/shufflevector.ll
+++ b/llvm/test/CodeGen/AArch64/shufflevector.ll
@@ -355,18 +355,11 @@ define <8 x i32> @shufflevector_v8i32(<8 x i32> %a, <8 x i32> %b) {
 }
 
 define <4 x i64> @shufflevector_v4i64(<4 x i64> %a, <4 x i64> %b) {
-; CHECK-SD-LABEL: shufflevector_v4i64:
-; CHECK-SD:       // %bb.0:
-; CHECK-SD-NEXT:    zip2 v2.2d, v2.2d, v3.2d
-; CHECK-SD-NEXT:    zip2 v0.2d, v0.2d, v1.2d
-; CHECK-SD-NEXT:    mov v1.16b, v2.16b
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: shufflevector_v4i64:
-; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    zip2 v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    zip2 v1.2d, v2.2d, v3.2d
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: shufflevector_v4i64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    zip2 v0.2d, v0.2d, v1.2d
+; CHECK-NEXT:    zip2 v1.2d, v2.2d, v3.2d
+; CHECK-NEXT:    ret
     %c = shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
     ret <4 x i64> %c
 }
diff --git a/llvm/test/CodeGen/AArch64/streaming-compatible-memory-ops.ll b/llvm/test/CodeGen/AArch64/streaming-compatible-memory-ops.ll
index 106d6190e88b9c..256ee4966aee43 100644
--- a/llvm/test/CodeGen/AArch64/streaming-compatible-memory-ops.ll
+++ b/llvm/test/CodeGen/AArch64/streaming-compatible-memory-ops.ll
@@ -180,16 +180,15 @@ define void @sc_memcpy(i64 noundef %n) "aarch64_pstate_sm_compatible" nounwind {
 ; CHECK-NO-SME-ROUTINES-NEXT:    stp x30, x9, [sp, #64] // 16-byte Folded Spill
 ; CHECK-NO-SME-ROUTINES-NEXT:    str x19, [sp, #80] // 8-byte Folded Spill
 ; CHECK-NO-SME-ROUTINES-NEXT:    bl __arm_sme_state
-; CHECK-NO-SME-ROUTINES-NEXT:    adrp x8, :got:dst
 ; CHECK-NO-SME-ROUTINES-NEXT:    and x19, x0, #0x1
+; CHECK-NO-SME-ROUTINES-NEXT:    adrp x0, :got:dst
 ; CHECK-NO-SME-ROUTINES-NEXT:    adrp x1, :got:src
-; CHECK-NO-SME-ROUTINES-NEXT:    ldr x8, [x8, :got_lo12:dst]
+; CHECK-NO-SME-ROUTINES-NEXT:    ldr x0, [x0, :got_lo12:dst]
 ; CHECK-NO-SME-ROUTINES-NEXT:    ldr x1, [x1, :got_lo12:src]
 ; CHECK-NO-SME-ROUTINES-NEXT:    tbz w19, #0, .LBB3_2
 ; CHECK-NO-SME-ROUTINES-NEXT:  // %bb.1: // %entry
 ; CHECK-NO-SME-ROUTINES-NEXT:    smstop sm
 ; CHECK-NO-SME-ROUTINES-NEXT:  .LBB3_2: // %entry
-; CHECK-NO-SME-ROUTINES-NEXT:    mov x0, x8
 ; CHECK-NO-SME-ROUTINES-NEXT:    bl memcpy
 ; CHECK-NO-SME-ROUTINES-NEXT:    tbz w19, #0, .LBB3_4
 ; CHECK-NO-SME-ROUTINES-NEXT:  // %bb.3: // %entry
diff --git a/llvm/test/CodeGen/AArch64/sve-vector-deinterleave.ll b/llvm/test/CodeGen/AArch64/sve-vector-deinterleave.ll
index fd1365d56fee47..a41eb89e32bb46 100644
--- a/llvm/test/CodeGen/AArch64/sve-vector-deinterleave.ll
+++ b/llvm/test/CodeGen/AArch64/sve-vector-deinterleave.ll
@@ -172,11 +172,10 @@ define {<vscale x 4 x i64>, <vscale x 4 x i64>} @vector_deinterleave_nxv4i64_nxv
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    uzp1 z4.d, z2.d, z3.d
 ; CHECK-NEXT:    uzp1 z5.d, z0.d, z1.d
-; CHECK-NEXT:    uzp2 z6.d, z0.d, z1.d
 ; CHECK-NEXT:    uzp2 z3.d, z2.d, z3.d
+; CHECK-NEXT:    uzp2 z2.d, z0.d, z1.d
 ; CHECK-NEXT:    mov z0.d, z5.d
 ; CHECK-NEXT:    mov z1.d, z4.d
-; CHECK-NEXT:    mov z2.d, z6.d
 ; CHECK-NEXT:    ret
 %retval = call {<vscale x 4 x i64>, <vscale x 4 x i64>} @llvm.vector.deinterleave2.nxv8i64(<vscale x 8 x i64> %vec)
 ret {<vscale x 4 x i64>, <vscale x 4 x i64>} %retval
@@ -187,19 +186,16 @@ define {<vscale x 8 x i64>, <vscale x 8 x i64>}  @vector_deinterleave_nxv8i64_nx
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    uzp1 z24.d, z2.d, z3.d
 ; CHECK-NEXT:    uzp1 z25.d, z0.d, z1.d
-; CHECK-NEXT:    uzp1 z26.d, z4.d, z5.d
-; CHECK-NEXT:    uzp1 z27.d, z6.d, z7.d
-; CHECK-NEXT:    uzp2 z28.d, z0.d, z1.d
 ; CHECK-NEXT:    uzp2 z29.d, z2.d, z3.d
-; CHECK-NEXT:    uzp2 z30.d, z4.d, z5.d
+; CHECK-NEXT:    uzp2 z28.d, z0.d, z1.d
+; CHECK-NEXT:    uzp1 z2.d, z4.d, z5.d
+; CHECK-NEXT:    uzp1 z3.d, z6.d, z7.d
 ; CHECK-NEXT:    uzp2 z7.d, z6.d, z7.d
+; CHECK-NEXT:    uzp2 z6.d, z4.d, z5.d
 ; CHECK-NEXT:    mov z0.d, z25.d
 ; CHECK-NEXT:    mov z1.d, z24.d
-; CHECK-NEXT:    mov z2.d, z26.d
-; CHECK-NEXT:    mov z3.d, z27.d
-; CHECK-NEXT:    mov z4.d, z28.d
 ; CHECK-NEXT:    mov z5.d, z29.d
-; CHECK-NEXT:    mov z6.d, z30.d
+; CHECK-NEXT:    mov z4.d, z28.d
 ; CHECK-NEXT:    ret
 %retval = call {<vscale x 8 x i64>, <vscale x 8 x i64>} @llvm.vector.deinterleave2.nxv16i64(<vscale x 16 x i64> %vec)
 ret {<vscale x 8 x i64>, <vscale x 8 x i64>}  %retval
diff --git a/llvm/test/CodeGen/AArch64/sve-vector-interleave.ll b/llvm/test/CodeGen/AArch64/sve-vector-interleave.ll
index e2c3b0abe21aab..fe089fa4a6417e 100644
--- a/llvm/test/CodeGen/AArch64/sve-vector-interleave.ll
+++ b/llvm/test/CodeGen/AArch64/sve-vector-interleave.ll
@@ -166,10 +166,9 @@ define <vscale x 16 x i32> @interleave2_nxv16i32(<vscale x 8 x i32> %vec0, <vsca
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    zip1 z4.s, z1.s, z3.s
 ; CHECK-NEXT:    zip1 z5.s, z0.s, z2.s
-; CHECK-NEXT:    zip2 z2.s, z0.s, z2.s
 ; CHECK-NEXT:    zip2 z3.s, z1.s, z3.s
+; CHECK-NEXT:    zip2 z1.s, z0.s, z2.s
 ; CHECK-NEXT:    mov z0.d, z5.d
-; CHECK-NEXT:    mov z1.d, z2.d
 ; CHECK-NEXT:    mov z2.d, z4.d
 ; CHECK-NEXT:    ret
   %retval = call <vscale x 16 x i32>@llvm.vector.interleave2.nxv16i32(<vscale x 8 x i32> %vec0, <vscale x 8 x i32> %vec1)
@@ -181,10 +180,9 @@ define <vscale x 8 x i64> @interleave2_nxv8i64(<vscale x 4 x i64> %vec0, <vscale
 ; CHECK:       // %bb.0:
 ; CHECK-NEXT:    zip1 z4.d, z1.d, z3.d
 ; CHECK-NEXT:    zip1 z5.d, z0.d, z2.d
-; CHECK-NEXT:    zip2 z2.d, z0.d, z2.d
 ; CHECK-NEXT:    zip2 z3.d, z1.d, z3.d
+; CHECK-NEXT:    zip2 z1.d, z0.d, z2.d
 ; CHECK-NEXT:    mov z0.d, z5.d
-; CHECK-NEXT:    mov z1.d, z2.d
 ; CHECK-NEXT:    mov z2.d, z4.d
 ; CHECK-NEXT:    ret
   %retval = call <vscale x 8 x i64> @llvm.vector.interleave2.nxv8i64(<vscale x 4 x i64> %vec0, <vscale x 4 x i64> %vec1)
diff --git a/llvm/test/CodeGen/AArch64/vec_umulo.ll b/llvm/test/CodeGen/AArch64/vec_umulo.ll
index 3a481efd9785aa..9fe249dc6d606f 100644
--- a/llvm/test/CodeGen/AArch64/vec_umulo.ll
+++ b/llvm/test/CodeGen/AArch64/vec_umulo.ll
@@ -60,8 +60,7 @@ define <3 x i32> @umulo_v3i32(<3 x i32> %a0, <3 x i32> %a1, ptr %p2) nounwind {
 ; CHECK-NEXT:    uzp2 v2.4s, v3.4s, v2.4s
 ; CHECK-NEXT:    st1 { v1.s }[2], [x8]
 ; CHECK-NEXT:    str d1, [x0]
-; CHECK-NEXT:    cmtst v2.4s, v2.4s, v2.4s
-; CHECK-NEXT:    mov v0.16b, v2.16b
+; CHECK-NEXT:    cmtst v0.4s, v2.4s, v2.4s
 ; CHECK-NEXT:    ret
   %t = call {<3 x i32>, <3 x i1>} @llvm.umul.with.overflow.v3i32(<3 x i32> %a0, <3 x i32> %a1)
   %val = extractvalue {<3 x i32>, <3 x i1>} %t, 0
@@ -79,8 +78,7 @@ define <4 x i32> @umulo_v4i32(<4 x i32> %a0, <4 x i32> %a1, ptr %p2) nounwind {
 ; CHECK-NEXT:    mul v1.4s, v0.4s, v1.4s
 ; CHECK-NEXT:    uzp2 v2.4s, v3.4s, v2.4s
 ; CHECK-NEXT:    str q1, [x0]
-; CHECK-NEXT:    cmtst v2.4s, v2.4s, v2.4s
-; CHECK-NEXT:    mov v0.16b, v2.16b
+; CHECK-NEXT:    cmtst v0.4s, v2.4s, v2.4s
 ; CHECK-NEXT:    ret
   %t = call {<4 x i32>, <4 x i1>} @llvm.umul.with.overflow.v4i32(<4 x i32> %a0, <4 x i32> %a1)
   %val = extractvalue {<4 x i32>, <4 x i1>} %t, 0
@@ -143,14 +141,13 @@ define <8 x i32> @umulo_v8i32(<8 x i32> %a0, <8 x i32> %a1, ptr %p2) nounwind {
 ; CHECK-NEXT:    umull v5.2d, v0.2s, v2.2s
 ; CHECK-NEXT:    umull2 v6.2d, v1.4s, v3.4s
 ; CHECK-NEXT:    umull v7.2d, v1.2s, v3.2s
-; CHECK-NEXT:    mul v1.4s, v1.4s, v3.4s
 ; CHECK-NEXT:    mul v2.4s, v0.4s, v2.4s
+; CHECK-NEXT:    mul v1.4s, v1.4s, v3.4s
 ; CHECK-NEXT:    uzp2 v4.4s, v5.4s, v4.4s
 ; CHECK-NEXT:    uzp2 v5.4s, v7.4s, v6.4s
 ; CHECK-NEXT:    stp q2, q1, [x0]
-; CHECK-NEXT:    cmtst v4.4s, v4.4s, v4.4s
+; CHECK-NEXT:    cmtst v0.4s, v4.4s, v4.4s
 ; CHECK-NEXT:    cmtst v5.4s, v5.4s, v5.4s
-; CHECK-NEXT:    mov v0.16b, v4.16b
 ; CHECK-NEXT:    mov v1.16b, v5.16b
 ; CHECK-NEXT:    ret
   %t = call {<8 x i32>, <8 x i1>} @llvm.umul.with.overflow.v8i32(<8 x i32> %a0, <8 x i32> %a1)
diff --git a/llvm/test/CodeGen/AArch64/vselect-ext.ll b/llvm/test/CodeGen/AArch64/vselect-ext.ll
index 0b90343a40c839..0aec5de3031dc5 100644
--- a/llvm/test/CodeGen/AArch64/vselect-ext.ll
+++ b/llvm/test/CodeGen/AArch64/vselect-ext.ll
@@ -333,8 +333,8 @@ define <16 x i32> @same_zext_used_in_cmp_unsigned_pred_and_select_other_use(<16
 ; CHECK-LABEL: same_zext_used_in_cmp_unsigned_pred_and_select_other_use:
 ; CHECK:       ; %bb.0: ; %entry
 ; CHECK-NEXT:    movi.16b v16, #10
-; CHECK-NEXT:    ushll.8h v19, v0, #0
 ; CHECK-NEXT:    ldr q21, [sp]
+; CHECK-NEXT:    ushll.8h v19, v0, #0
 ; CHECK-NEXT:    ushll.4s v24, v19, #0
 ; CHECK-NEXT:    ushll2.4s v19, v19, #0
 ; CHECK-NEXT:    cmhi.16b v16, v0, v16
@@ -352,8 +352,8 @@ define <16 x i32> @same_zext_used_in_cmp_unsigned_pred_and_select_other_use(<16
 ; CHECK-NEXT:    sshll2.2d v26, v17, #0
 ; CHECK-NEXT:    sshll.2d v27, v17, #0
 ; CHECK-NEXT:    and.16b v20, v21, v20
-; CHECK-NEXT:    sshll2.2d v21, v22, #0
 ; CHECK-NEXT:    and.16b v7, v7, v23
+; CHECK-NEXT:    sshll2.2d v21, v22, #0
 ; CHECK-NEXT:    sshll.2d v23, v22, #0
 ; CHECK-NEXT:    and.16b v6, v6, v26
 ; CHECK-NEXT:    sshll2.2d v26, v16, #0
@@ -361,16 +361,14 @@ define <16 x i32> @same_zext_used_in_cmp_unsigned_pred_and_select_other_use(<16
 ; CHECK-NEXT:    stp q7, q20, [x0, #96]
 ; CHECK-NEXT:    sshll.2d v20, v16, #0
 ; CHECK-NEXT:    and.16b v21, v4, v21
-; CHECK-NEXT:    and.16b v4, v0, v18
 ; CHECK-NEXT:    and.16b v7, v3, v23
-; CHECK-NEXT:    and.16b v3, v19, v22
-; CHECK-NEXT:    stp q5, q6, [x0, #64]
+; CHECK-NEXT:    and.16b v3, v0, v18
 ; CHECK-NEXT:    and.16b v0, v24, v16
+; CHECK-NEXT:    stp q5, q6, [x0, #64]
 ; CHECK-NEXT:    and.16b v6, v2, v26
 ; CHECK-NEXT:    and.16b v2, v25, v17
 ; CHECK-NEXT:    and.16b v5, v1, v20
-; CHECK-NEXT:    mov.16b v1, v3
-; CHECK-NEXT:    mov.16b v3, v4
+; CHECK-NEXT:    and.16b v1, v19, v22
 ; CHECK-NEXT:    stp q7, q21, [x0, #32]
 ; CHECK-NEXT:    stp q5, q6, [x0]
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/AMDGPU/atomic_optimizations_pixelshader.ll b/llvm/test/CodeGen/AMDGPU/atomic_optimizations_pixelshader.ll
index 29704959fc1763..9e26ddabf5e043 100644
--- a/llvm/test/CodeGen/AMDGPU/atomic_optimizations_pixelshader.ll
+++ b/llvm/test/CodeGen/AMDGPU/atomic_optimizations_pixelshader.ll
@@ -463,20 +463,20 @@ define amdgpu_ps void @add_i32_varying(ptr addrspace(8) inreg %out, ptr addrspac
 ; GFX1032-NEXT:    v_mov_b32_e32 v2, v1
 ; GFX1032-NEXT:    v_permlanex16_b32 v2, v2, -1, -1
 ; GFX1032-NEXT:    v_add_nc_u32_dpp v1, v2, v1 quad_perm:[0,1,2,3] row_mask:0xa bank_mask:0xf
-; GFX1032-NEXT:    v_readlane_b32 s11, v1, 31
 ; GFX1032-NEXT:    v_mov_b32_dpp v3, v1 row_shr:1 row_mask:0xf bank_mask:0xf
 ; GFX1032-NEXT:    v_readlane_b32 s10, v1, 15
+; GFX1032-NEXT:    v_writelane_b32 v3, s10, 16
+; GFX1032-NEXT:    v_readlane_b32 s10, v1, 31
 ; GFX1032-NEXT:    s_mov_b32 exec_lo, s9
 ; GFX1032-NEXT:    v_mbcnt_lo_u32_b32 v0, exec_lo, 0
 ; GFX1032-NEXT:    s_or_saveexec_b32 s9, -1
-; GFX1032-NEXT:    v_writelane_b32 v3, s10, 16
 ; GFX1032-NEXT:    s_mov_b32 exec_lo, s9
 ; GFX1032-NEXT:    v_cmp_eq_u32_e32 vcc_lo, 0, v0
 ; GFX1032-NEXT:    ; implicit-def: $vgpr0
 ; GFX1032-NEXT:    s_and_saveexec_b32 s9, vcc_lo
 ; GFX1032-NEXT:    s_cbranch_execz .LBB1_3
 ; GFX1032-NEXT:  ; %bb.2:
-; GFX1032-NEXT:    v_mov_b32_e32 v0, s11
+; GFX1032-NEXT:    v_mov_b32_e32 v0, s10
 ; GFX1032-NEXT:    buffer_atomic_add v0, off, s[4:7], 0 glc
 ; GFX1032-NEXT:  .LBB1_3:
 ; GFX1032-NEXT:    s_waitcnt_depctr 0xffe3
@@ -598,22 +598,22 @@ define amdgpu_ps void @add_i32_varying(ptr addrspace(8) inreg %out, ptr addrspac
 ; GFX1132-NEXT:    v_permlanex16_b32 v2, v2, -1, -1
 ; GFX1132-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
 ; GFX1132-NEXT:    v_add_nc_u32_dpp v1, v2, v1 quad_perm:[0,1,2,3] row_mask:0xa bank_mask:0xf
-; GFX1132-NEXT:    v_readlane_b32 s11, v1, 31
 ; GFX1132-NEXT:    v_mov_b32_dpp v3, v1 row_shr:1 row_mask:0xf bank_mask:0xf
 ; GFX1132-NEXT:    v_readlane_b32 s10, v1, 15
+; GFX1132-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
+; GFX1132-NEXT:    v_writelane_b32 v3, s10, 16
+; GFX1132-NEXT:    v_readlane_b32 s10, v1, 31
 ; GFX1132-NEXT:    s_mov_b32 exec_lo, s9
-; GFX1132-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
 ; GFX1132-NEXT:    v_mbcnt_lo_u32_b32 v0, exec_lo, 0
 ; GFX1132-NEXT:    s_or_saveexec_b32 s9, -1
-; GFX1132-NEXT:    v_writelane_b32 v3, s10, 16
+; GFX1132-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
 ; GFX1132-NEXT:    s_mov_b32 exec_lo, s9
-; GFX1132-NEXT:    s_delay_alu instid0(VALU_DEP_2)
 ; GFX1132-NEXT:    v_cmp_eq_u32_e32 vcc_lo, 0, v0
 ; GFX1132-NEXT:    ; implicit-def: $vgpr0
 ; GFX1132-NEXT:    s_and_saveexec_b32 s9, vcc_lo
 ; GFX1132-NEXT:    s_cbranch_execz .LBB1_3
 ; GFX1132-NEXT:  ; %bb.2:
-; GFX1132-NEXT:    v_mov_b32_e32 v0, s11
+; GFX1132-NEXT:    v_mov_b32_e32 v0, s10
 ; GFX1132-NEXT:    buffer_atomic_add_u32 v0, off, s[4:7], 0 glc
 ; GFX1132-NEXT:  .LBB1_3:
 ; GFX1132-NEXT:    s_or_b32 exec_lo, exec_lo, s9
diff --git a/llvm/test/CodeGen/AMDGPU/vector_shuffle.packed.ll b/llvm/test/CodeGen/AMDGPU/vector_shuffle.packed.ll
index 66c49ba8b734db..dc1f2c82f7f4a9 100644
--- a/llvm/test/CodeGen/AMDGPU/vector_shuffle.packed.ll
+++ b/llvm/test/CodeGen/AMDGPU/vector_shuffle.packed.ll
@@ -155,22 +155,23 @@ define <4 x half> @shuffle_v4f16_3u6u(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_3u6u:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_3u6u:
@@ -193,22 +194,23 @@ define <4 x half> @shuffle_v4f16_3uu7(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_3uu7:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_3uu7:
@@ -364,22 +366,23 @@ define <4 x half> @shuffle_v4f16_0145(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_0145:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_0145:
@@ -400,22 +403,23 @@ define <4 x half> @shuffle_v4f16_0167(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_0167:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_0167:
@@ -496,22 +500,23 @@ define <4 x half> @shuffle_v4f16_2345(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_2345:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_2345:
@@ -532,22 +537,23 @@ define <4 x half> @shuffle_v4f16_2367(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4f16_2367:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4f16_2367:
@@ -1069,22 +1075,23 @@ define <4 x i16> @shuffle_v4i16_0167(ptr addrspace(1) %arg0, ptr addrspace(1) %a
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4i16_0167:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4i16_0167:
@@ -1366,22 +1373,23 @@ define <4 x half> @shuffle_v8f16_4589(ptr addrspace(1) %arg0, ptr addrspace(1) %
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off offset:8
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v8f16_4589:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off offset:8
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v8f16_4589:
@@ -1543,11 +1551,12 @@ define <6 x half> @shuffle_v6f16_452367(ptr addrspace(1) %arg0, ptr addrspace(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v4, v3
 ; GFX9-NEXT:    v_mov_b32_e32 v3, v2
 ; GFX9-NEXT:    global_load_dwordx3 v[0:2], v[5:6], off
-; GFX9-NEXT:    global_load_dword v7, v[3:4], off
-; GFX9-NEXT:    s_waitcnt vmcnt(1)
+; GFX9-NEXT:    ; kill: killed $vgpr3 killed $vgpr4
+; GFX9-NEXT:    ; kill: killed $vgpr5 killed $vgpr6
+; GFX9-NEXT:    s_waitcnt vmcnt(0)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    global_load_dword v2, v[3:4], off
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v2, v7
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v6f16_452367:
@@ -1557,12 +1566,13 @@ define <6 x half> @shuffle_v6f16_452367(ptr addrspace(1) %arg0, ptr addrspace(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v5, v0
 ; GFX10-NEXT:    v_mov_b32_e32 v4, v3
 ; GFX10-NEXT:    v_mov_b32_e32 v3, v2
+; GFX10-NEXT:    ; kill: killed $vgpr3 killed $vgpr4
+; GFX10-NEXT:    ; kill: killed $vgpr5 killed $vgpr6
 ; GFX10-NEXT:    global_load_dwordx3 v[0:2], v[5:6], off
-; GFX10-NEXT:    global_load_dword v7, v[3:4], off
-; GFX10-NEXT:    s_waitcnt vmcnt(1)
+; GFX10-NEXT:    s_waitcnt vmcnt(0)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v2
+; GFX10-NEXT:    global_load_dword v2, v[3:4], off
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v2, v7
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v6f16_452367:
@@ -1570,11 +1580,10 @@ define <6 x half> @shuffle_v6f16_452367(ptr addrspace(1) %arg0, ptr addrspace(1)
 ; GFX11-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX11-NEXT:    v_dual_mov_b32 v4, v3 :: v_dual_mov_b32 v3, v2
 ; GFX11-NEXT:    global_load_b96 v[0:2], v[0:1], off
-; GFX11-NEXT:    global_load_b32 v3, v[3:4], off
-; GFX11-NEXT:    s_waitcnt vmcnt(1)
+; GFX11-NEXT:    s_waitcnt vmcnt(0)
 ; GFX11-NEXT:    v_mov_b32_e32 v0, v2
+; GFX11-NEXT:    global_load_b32 v2, v[3:4], off
 ; GFX11-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-NEXT:    v_mov_b32_e32 v2, v3
 ; GFX11-NEXT:    s_setpc_b64 s[30:31]
   %val0 = load <6 x half>, ptr addrspace(1) %arg0
   %val1 = load <6 x half>, ptr addrspace(1) %arg1
@@ -2824,22 +2833,23 @@ define <4 x bfloat> @shuffle_v4bf16_3u6u(ptr addrspace(1) %arg0, ptr addrspace(1
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4bf16_3u6u:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4bf16_3u6u:
@@ -2862,22 +2872,23 @@ define <4 x bfloat> @shuffle_v4bf16_3uu7(ptr addrspace(1) %arg0, ptr addrspace(1
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4bf16_3uu7:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v5, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_alignbit_b32 v0, s4, v5, 16
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4bf16_3uu7:
@@ -3082,23 +3093,24 @@ define <4 x bfloat> @shuffle_v4bf16_0167(ptr addrspace(1) %arg0, ptr addrspace(1
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dwordx2 v[5:6], v[0:1], off
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_mov_b32 s4, 0xffff
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_bfi_b32 v0, s4, v5, v5
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4bf16_0167:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dwordx2 v[5:6], v[0:1], off
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_bfi_b32 v0, 0xffff, v5, v5
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4bf16_0167:
@@ -3224,22 +3236,23 @@ define <4 x bfloat> @shuffle_v4bf16_2367(ptr addrspace(1) %arg0, ptr addrspace(1
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX9-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4bf16_2367:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dword v4, v[0:1], off offset:4
-; GFX10-NEXT:    global_load_dword v5, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v1, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v1, v5
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4bf16_2367:
@@ -3405,23 +3418,24 @@ define <4 x bfloat> @shuffle_v4bf16_6701(ptr addrspace(1) %arg0, ptr addrspace(1
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    global_load_dwordx2 v[5:6], v[0:1], off
-; GFX9-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX9-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX9-NEXT:    s_nop 0
+; GFX9-NEXT:    global_load_dword v0, v[2:3], off offset:4
 ; GFX9-NEXT:    s_mov_b32 s4, 0xffff
 ; GFX9-NEXT:    s_waitcnt vmcnt(1)
 ; GFX9-NEXT:    v_bfi_b32 v1, s4, v5, v5
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v4bf16_6701:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    global_load_dwordx2 v[5:6], v[0:1], off
-; GFX10-NEXT:    global_load_dword v4, v[2:3], off offset:4
+; GFX10-NEXT:    ; kill: killed $vgpr0 killed $vgpr1
+; GFX10-NEXT:    global_load_dword v0, v[2:3], off offset:4
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
 ; GFX10-NEXT:    v_bfi_b32 v1, 0xffff, v5, v5
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v0, v4
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v4bf16_6701:
@@ -4195,11 +4209,12 @@ define <6 x bfloat> @shuffle_v6bf16_452367(ptr addrspace(1) %arg0, ptr addrspace
 ; GFX9-NEXT:    v_mov_b32_e32 v4, v3
 ; GFX9-NEXT:    v_mov_b32_e32 v3, v2
 ; GFX9-NEXT:    global_load_dwordx3 v[0:2], v[5:6], off
-; GFX9-NEXT:    global_load_dword v7, v[3:4], off
-; GFX9-NEXT:    s_waitcnt vmcnt(1)
+; GFX9-NEXT:    ; kill: killed $vgpr3 killed $vgpr4
+; GFX9-NEXT:    ; kill: killed $vgpr5 killed $vgpr6
+; GFX9-NEXT:    s_waitcnt vmcnt(0)
 ; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    global_load_dword v2, v[3:4], off
 ; GFX9-NEXT:    s_waitcnt vmcnt(0)
-; GFX9-NEXT:    v_mov_b32_e32 v2, v7
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: shuffle_v6bf16_452367:
@@ -4209,12 +4224,13 @@ define <6 x bfloat> @shuffle_v6bf16_452367(ptr addrspace(1) %arg0, ptr addrspace
 ; GFX10-NEXT:    v_mov_b32_e32 v5, v0
 ; GFX10-NEXT:    v_mov_b32_e32 v4, v3
 ; GFX10-NEXT:    v_mov_b32_e32 v3, v2
+; GFX10-NEXT:    ; kill: killed $vgpr3 killed $vgpr4
+; GFX10-NEXT:    ; kill: killed $vgpr5 killed $vgpr6
 ; GFX10-NEXT:    global_load_dwordx3 v[0:2], v[5:6], off
-; GFX10-NEXT:    global_load_dword v7, v[3:4], off
-; GFX10-NEXT:    s_waitcnt vmcnt(1)
+; GFX10-NEXT:    s_waitcnt vmcnt(0)
 ; GFX10-NEXT:    v_mov_b32_e32 v0, v2
+; GFX10-NEXT:    global_load_dword v2, v[3:4], off
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
-; GFX10-NEXT:    v_mov_b32_e32 v2, v7
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: shuffle_v6bf16_452367:
@@ -4222,11 +4238,10 @@ define <6 x bfloat> @shuffle_v6bf16_452367(ptr addrspace(1) %arg0, ptr addrspace
 ; GFX11-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX11-NEXT:    v_dual_mov_b32 v4, v3 :: v_dual_mov_b32 v3, v2
 ; GFX11-NEXT:    global_load_b96 v[0:2], v[0:1], off
-; GFX11-NEXT:    global_load_b32 v3, v[3:4], off
-; GFX11-NEXT:    s_waitcnt vmcnt(1)
+; GFX11-NEXT:    s_waitcnt vmcnt(0)
 ; GFX11-NEXT:    v_mov_b32_e32 v0, v2
+; GFX11-NEXT:    global_load_b32 v2, v[3:4], off
 ; GFX11-NEXT:    s_waitcnt vmcnt(0)
-; GFX11-NEXT:    v_mov_b32_e32 v2, v3
 ; GFX11-NEXT:    s_setpc_b64 s[30:31]
   %val0 = load <6 x bfloat>, ptr addrspace(1) %arg0
   %val1 = load <6 x bfloat>, ptr addrspace(1) %arg1
diff --git a/llvm/test/CodeGen/LoongArch/tail-calls.ll b/llvm/test/CodeGen/LoongArch/tail-calls.ll
index 8298d76d8e3a68..e6d2d440dc9ad8 100644
--- a/llvm/test/CodeGen/LoongArch/tail-calls.ll
+++ b/llvm/test/CodeGen/LoongArch/tail-calls.ll
@@ -20,10 +20,9 @@ define void @caller_extern(ptr %src) optsize {
 ; CHECK-LABEL: caller_extern:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    pcalau12i $a1, %got_pc_hi20(dest)
-; CHECK-NEXT:    ld.d $a1, $a1, %got_pc_lo12(dest)
-; CHECK-NEXT:    ori $a2, $zero, 33
 ; CHECK-NEXT:    move $a3, $a0
-; CHECK-NEXT:    move $a0, $a1
+; CHECK-NEXT:    ld.d $a0, $a1, %got_pc_lo12(dest)
+; CHECK-NEXT:    ori $a2, $zero, 33
 ; CHECK-NEXT:    move $a1, $a3
 ; CHECK-NEXT:    b %plt(memcpy)
 entry:
diff --git a/llvm/test/CodeGen/LoongArch/vector-fp-imm.ll b/llvm/test/CodeGen/LoongArch/vector-fp-imm.ll
index 0a401ebe5f6b2a..6f732389626fcf 100644
--- a/llvm/test/CodeGen/LoongArch/vector-fp-imm.ll
+++ b/llvm/test/CodeGen/LoongArch/vector-fp-imm.ll
@@ -454,11 +454,10 @@ define void @test_d2(ptr %P, ptr %S) nounwind {
 ; LA32F-NEXT:    ld.w $fp, $a0, 8
 ; LA32F-NEXT:    ld.w $s0, $a0, 12
 ; LA32F-NEXT:    ld.w $a2, $a0, 0
-; LA32F-NEXT:    ld.w $a4, $a0, 4
 ; LA32F-NEXT:    move $s1, $a1
+; LA32F-NEXT:    ld.w $a1, $a0, 4
 ; LA32F-NEXT:    lu12i.w $a3, 261888
 ; LA32F-NEXT:    move $a0, $a2
-; LA32F-NEXT:    move $a1, $a4
 ; LA32F-NEXT:    move $a2, $zero
 ; LA32F-NEXT:    bl %plt(__adddf3)
 ; LA32F-NEXT:    move $s2, $a0
@@ -565,11 +564,10 @@ define void @test_d4(ptr %P, ptr %S) nounwind {
 ; LA32F-NEXT:    ld.w $s3, $a0, 8
 ; LA32F-NEXT:    ld.w $s4, $a0, 12
 ; LA32F-NEXT:    ld.w $a2, $a0, 0
-; LA32F-NEXT:    ld.w $a4, $a0, 4
 ; LA32F-NEXT:    move $s5, $a1
+; LA32F-NEXT:    ld.w $a1, $a0, 4
 ; LA32F-NEXT:    lu12i.w $a3, 261888
 ; LA32F-NEXT:    move $a0, $a2
-; LA32F-NEXT:    move $a1, $a4
 ; LA32F-NEXT:    move $a2, $zero
 ; LA32F-NEXT:    bl %plt(__adddf3)
 ; LA32F-NEXT:    move $s6, $a0
@@ -754,11 +752,10 @@ define void @test_d8(ptr %P, ptr %S) nounwind {
 ; LA32F-NEXT:    ld.w $s7, $a0, 8
 ; LA32F-NEXT:    ld.w $s0, $a0, 12
 ; LA32F-NEXT:    ld.w $a2, $a0, 0
-; LA32F-NEXT:    ld.w $a4, $a0, 4
 ; LA32F-NEXT:    move $fp, $a1
+; LA32F-NEXT:    ld.w $a1, $a0, 4
 ; LA32F-NEXT:    lu12i.w $a3, 261888
 ; LA32F-NEXT:    move $a0, $a2
-; LA32F-NEXT:    move $a1, $a4
 ; LA32F-NEXT:    move $a2, $zero
 ; LA32F-NEXT:    bl %plt(__adddf3)
 ; LA32F-NEXT:    st.w $a0, $sp, 40 # 4-byte Folded Spill
diff --git a/llvm/test/CodeGen/Mips/llvm-ir/or.ll b/llvm/test/CodeGen/Mips/llvm-ir/or.ll
index c595ff42086b25..7bc37408579722 100644
--- a/llvm/test/CodeGen/Mips/llvm-ir/or.ll
+++ b/llvm/test/CodeGen/Mips/llvm-ir/or.ll
@@ -332,12 +332,11 @@ entry:
 define signext i128 @or_i128_4(i128 signext %b) {
 ; GP32-LABEL: or_i128_4:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 4
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 4
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_4:
 ; GP64:       # %bb.0: # %entry
@@ -347,20 +346,18 @@ define signext i128 @or_i128_4(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_4:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 4
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 4
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_4:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 4
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 4
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 4, %b
@@ -499,12 +496,11 @@ entry:
 define signext i128 @or_i128_31(i128 signext %b) {
 ; GP32-LABEL: or_i128_31:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 31
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 31
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_31:
 ; GP64:       # %bb.0: # %entry
@@ -514,20 +510,18 @@ define signext i128 @or_i128_31(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_31:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 31
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 31
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_31:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 31
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 31
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 31, %b
@@ -666,12 +660,11 @@ entry:
 define signext i128 @or_i128_255(i128 signext %b) {
 ; GP32-LABEL: or_i128_255:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 255
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 255
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_255:
 ; GP64:       # %bb.0: # %entry
@@ -681,20 +674,18 @@ define signext i128 @or_i128_255(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_255:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 255
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 255
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_255:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 255
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 255
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 255, %b
@@ -837,12 +828,11 @@ entry:
 define signext i128 @or_i128_32768(i128 signext %b) {
 ; GP32-LABEL: or_i128_32768:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 32768
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 32768
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_32768:
 ; GP64:       # %bb.0: # %entry
@@ -852,20 +842,18 @@ define signext i128 @or_i128_32768(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_32768:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 32768
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 32768
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_32768:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 32768
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 32768
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 32768, %b
@@ -1004,12 +992,11 @@ entry:
 define signext i128 @or_i128_65(i128 signext %b) {
 ; GP32-LABEL: or_i128_65:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 65
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 65
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_65:
 ; GP64:       # %bb.0: # %entry
@@ -1019,20 +1006,18 @@ define signext i128 @or_i128_65(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_65:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 65
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 65
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_65:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 65
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 65
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 65, %b
@@ -1171,12 +1156,11 @@ entry:
 define signext i128 @or_i128_256(i128 signext %b) {
 ; GP32-LABEL: or_i128_256:
 ; GP32:       # %bb.0: # %entry
-; GP32-NEXT:    ori $1, $7, 256
-; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    move $3, $5
-; GP32-NEXT:    move $4, $6
+; GP32-NEXT:    ori $5, $7, 256
+; GP32-NEXT:    move $2, $4
 ; GP32-NEXT:    jr $ra
-; GP32-NEXT:    move $5, $1
+; GP32-NEXT:    move $4, $6
 ;
 ; GP64-LABEL: or_i128_256:
 ; GP64:       # %bb.0: # %entry
@@ -1186,20 +1170,18 @@ define signext i128 @or_i128_256(i128 signext %b) {
 ;
 ; MM32-LABEL: or_i128_256:
 ; MM32:       # %bb.0: # %entry
-; MM32-NEXT:    ori $1, $7, 256
-; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $3, $5
+; MM32-NEXT:    ori $5, $7, 256
+; MM32-NEXT:    move $2, $4
 ; MM32-NEXT:    move $4, $6
-; MM32-NEXT:    move $5, $1
 ; MM32-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: or_i128_256:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    ori $1, $7, 256
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    ori $5, $7, 256
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = or i128 256, %b
diff --git a/llvm/test/CodeGen/Mips/llvm-ir/xor.ll b/llvm/test/CodeGen/Mips/llvm-ir/xor.ll
index 972e3b6685a658..cb4054cfc5b98e 100644
--- a/llvm/test/CodeGen/Mips/llvm-ir/xor.ll
+++ b/llvm/test/CodeGen/Mips/llvm-ir/xor.ll
@@ -591,30 +591,27 @@ entry:
 define signext i128 @xor_i128_4(i128 signext %b) {
 ; MIPS-LABEL: xor_i128_4:
 ; MIPS:       # %bb.0: # %entry
-; MIPS-NEXT:    xori $1, $7, 4
-; MIPS-NEXT:    move $2, $4
 ; MIPS-NEXT:    move $3, $5
-; MIPS-NEXT:    move $4, $6
+; MIPS-NEXT:    xori $5, $7, 4
+; MIPS-NEXT:    move $2, $4
 ; MIPS-NEXT:    jr $ra
-; MIPS-NEXT:    move $5, $1
+; MIPS-NEXT:    move $4, $6
 ;
 ; MIPS32R2-LABEL: xor_i128_4:
 ; MIPS32R2:       # %bb.0: # %entry
-; MIPS32R2-NEXT:    xori $1, $7, 4
-; MIPS32R2-NEXT:    move $2, $4
 ; MIPS32R2-NEXT:    move $3, $5
-; MIPS32R2-NEXT:    move $4, $6
+; MIPS32R2-NEXT:    xori $5, $7, 4
+; MIPS32R2-NEXT:    move $2, $4
 ; MIPS32R2-NEXT:    jr $ra
-; MIPS32R2-NEXT:    move $5, $1
+; MIPS32R2-NEXT:    move $4, $6
 ;
 ; MIPS32R6-LABEL: xor_i128_4:
 ; MIPS32R6:       # %bb.0: # %entry
-; MIPS32R6-NEXT:    xori $1, $7, 4
-; MIPS32R6-NEXT:    move $2, $4
 ; MIPS32R6-NEXT:    move $3, $5
-; MIPS32R6-NEXT:    move $4, $6
+; MIPS32R6-NEXT:    xori $5, $7, 4
+; MIPS32R6-NEXT:    move $2, $4
 ; MIPS32R6-NEXT:    jr $ra
-; MIPS32R6-NEXT:    move $5, $1
+; MIPS32R6-NEXT:    move $4, $6
 ;
 ; MIPS64-LABEL: xor_i128_4:
 ; MIPS64:       # %bb.0: # %entry
@@ -636,20 +633,18 @@ define signext i128 @xor_i128_4(i128 signext %b) {
 ;
 ; MM32R3-LABEL: xor_i128_4:
 ; MM32R3:       # %bb.0: # %entry
-; MM32R3-NEXT:    xori $1, $7, 4
-; MM32R3-NEXT:    move $2, $4
 ; MM32R3-NEXT:    move $3, $5
+; MM32R3-NEXT:    xori $5, $7, 4
+; MM32R3-NEXT:    move $2, $4
 ; MM32R3-NEXT:    move $4, $6
-; MM32R3-NEXT:    move $5, $1
 ; MM32R3-NEXT:    jrc $ra
 ;
 ; MM32R6-LABEL: xor_i128_4:
 ; MM32R6:       # %bb.0: # %entry
-; MM32R6-NEXT:    xori $1, $7, 4
-; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $3, $5
+; MM32R6-NEXT:    xori $5, $7, 4
+; MM32R6-NEXT:    move $2, $4
 ; MM32R6-NEXT:    move $4, $6
-; MM32R6-NEXT:    move $5, $1
 ; MM32R6-NEXT:    jrc $ra
 entry:
   %r = xor i128 4, %b
diff --git a/llvm/test/CodeGen/PowerPC/BreakableToken-reduced.ll b/llvm/test/CodeGen/PowerPC/BreakableToken-reduced.ll
index 1b62dff024e87f..d98f2610121477 100644
--- a/llvm/test/CodeGen/PowerPC/BreakableToken-reduced.ll
+++ b/llvm/test/CodeGen/PowerPC/BreakableToken-reduced.ll
@@ -225,13 +225,12 @@ define void @_ZN5clang6format22BreakableStringLiteral11insertBreakEjjSt4pairImjE
 ; CHECK-NEXT:    li 11, 1
 ; CHECK-NEXT:    std 0, 160(1)
 ; CHECK-NEXT:    add 5, 6, 5
-; CHECK-NEXT:    lbz 30, 20(3)
 ; CHECK-NEXT:    clrldi 6, 7, 32
+; CHECK-NEXT:    ld 7, 64(3)
+; CHECK-NEXT:    lbz 30, 20(3)
 ; CHECK-NEXT:    iseleq 8, 11, 8
-; CHECK-NEXT:    ld 11, 64(3)
 ; CHECK-NEXT:    add 5, 5, 10
 ; CHECK-NEXT:    clrldi 5, 5, 32
-; CHECK-NEXT:    mr 7, 11
 ; CHECK-NEXT:    sub 0, 4, 8
 ; CHECK-NEXT:    ld 4, 8(3)
 ; CHECK-NEXT:    ld 8, 72(3)
diff --git a/llvm/test/CodeGen/PowerPC/atomics-i128-ldst.ll b/llvm/test/CodeGen/PowerPC/atomics-i128-ldst.ll
index 8967eac223caa1..0e9ce6ac44495b 100644
--- a/llvm/test/CodeGen/PowerPC/atomics-i128-ldst.ll
+++ b/llvm/test/CodeGen/PowerPC/atomics-i128-ldst.ll
@@ -555,9 +555,8 @@ define dso_local void @stqx_unordered(i128 %val, ptr %dst, i64 %idx) {
 ; PPC-PWR8-NEXT:    stw r4, 20(r1)
 ; PPC-PWR8-NEXT:    stw r3, 16(r1)
 ; PPC-PWR8-NEXT:    li r3, 16
-; PPC-PWR8-NEXT:    add r6, r7, r8
-; PPC-PWR8-NEXT:    mr r4, r6
 ; PPC-PWR8-NEXT:    li r6, 0
+; PPC-PWR8-NEXT:    add r4, r7, r8
 ; PPC-PWR8-NEXT:    bl __atomic_store
 ; PPC-PWR8-NEXT:    lwz r0, 36(r1)
 ; PPC-PWR8-NEXT:    addi r1, r1, 32
@@ -624,14 +623,13 @@ define dso_local void @stq_big_offset_unordered(i128 %val, ptr %dst) {
 ; PPC-PWR8-NEXT:    .cfi_def_cfa_offset 32
 ; PPC-PWR8-NEXT:    .cfi_offset lr, 4
 ; PPC-PWR8-NEXT:    stw r6, 28(r1)
-; PPC-PWR8-NEXT:    addis r6, r7, 32
 ; PPC-PWR8-NEXT:    stw r5, 24(r1)
 ; PPC-PWR8-NEXT:    addi r5, r1, 16
+; PPC-PWR8-NEXT:    li r6, 0
 ; PPC-PWR8-NEXT:    stw r4, 20(r1)
+; PPC-PWR8-NEXT:    addis r4, r7, 32
 ; PPC-PWR8-NEXT:    stw r3, 16(r1)
 ; PPC-PWR8-NEXT:    li r3, 16
-; PPC-PWR8-NEXT:    mr r4, r6
-; PPC-PWR8-NEXT:    li r6, 0
 ; PPC-PWR8-NEXT:    bl __atomic_store
 ; PPC-PWR8-NEXT:    lwz r0, 36(r1)
 ; PPC-PWR8-NEXT:    addi r1, r1, 32
diff --git a/llvm/test/CodeGen/PowerPC/legalize-invert-br_cc.ll b/llvm/test/CodeGen/PowerPC/legalize-invert-br_cc.ll
index 1e109d0ea4a7f2..51cbff635bb2b0 100644
--- a/llvm/test/CodeGen/PowerPC/legalize-invert-br_cc.ll
+++ b/llvm/test/CodeGen/PowerPC/legalize-invert-br_cc.ll
@@ -13,9 +13,8 @@ define void @test_fcmpueq_legalize_br_cc_with_invert(float %a) {
 ; CHECK-NEXT:  .LBB0_1: # %l1
 ; CHECK-NEXT:    #
 ; CHECK-NEXT:    efscmplt 7, 3, 4
-; CHECK-NEXT:    efscmpgt 0, 3, 4
 ; CHECK-NEXT:    mfcr 5 # cr7
-; CHECK-NEXT:    mcrf 7, 0
+; CHECK-NEXT:    efscmpgt 7, 3, 4
 ; CHECK-NEXT:    mfcr 6 # cr7
 ; CHECK-NEXT:    rlwinm 5, 5, 30, 31, 31
 ; CHECK-NEXT:    rlwinm 6, 6, 30, 31, 31
diff --git a/llvm/test/CodeGen/PowerPC/lower-intrinsics-afn-mass_notail.ll b/llvm/test/CodeGen/PowerPC/lower-intrinsics-afn-mass_notail.ll
index e7d5d6c3847f83..16521e02b37fe8 100644
--- a/llvm/test/CodeGen/PowerPC/lower-intrinsics-afn-mass_notail.ll
+++ b/llvm/test/CodeGen/PowerPC/lower-intrinsics-afn-mass_notail.ll
@@ -34,8 +34,7 @@ define void @cos_f64(ptr %arg) {
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    lwz 3, L..C0(2) # %const.0
 ; CHECK-AIX-NEXT:    fmr 31, 1
-; CHECK-AIX-NEXT:    lfs 0, 0(3)
-; CHECK-AIX-NEXT:    fmr 1, 0
+; CHECK-AIX-NEXT:    lfs 1, 0(3)
 ; CHECK-AIX-NEXT:    bl .__xl_cos[PR]
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    fmul 0, 31, 1
@@ -90,8 +89,7 @@ define void @log_f64(ptr %arg) {
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    lwz 3, L..C1(2) # %const.0
 ; CHECK-AIX-NEXT:    fmr 31, 1
-; CHECK-AIX-NEXT:    lfs 0, 0(3)
-; CHECK-AIX-NEXT:    fmr 1, 0
+; CHECK-AIX-NEXT:    lfs 1, 0(3)
 ; CHECK-AIX-NEXT:    bl .__xl_log[PR]
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    fmul 0, 31, 1
diff --git a/llvm/test/CodeGen/PowerPC/lower-intrinsics-fast-mass_notail.ll b/llvm/test/CodeGen/PowerPC/lower-intrinsics-fast-mass_notail.ll
index 3fe1b7086c6974..a8d8ef706e40ef 100644
--- a/llvm/test/CodeGen/PowerPC/lower-intrinsics-fast-mass_notail.ll
+++ b/llvm/test/CodeGen/PowerPC/lower-intrinsics-fast-mass_notail.ll
@@ -32,8 +32,7 @@ define void @cos_f64(ptr %arg) {
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    lwz 3, L..C0(2) # %const.0
 ; CHECK-AIX-NEXT:    fmr 31, 1
-; CHECK-AIX-NEXT:    lfs 0, 0(3)
-; CHECK-AIX-NEXT:    fmr 1, 0
+; CHECK-AIX-NEXT:    lfs 1, 0(3)
 ; CHECK-AIX-NEXT:    bl .__xl_cos_finite[PR]
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    fmul 0, 31, 1
@@ -86,8 +85,7 @@ define void @log_f64(ptr %arg) {
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    lwz 3, L..C1(2) # %const.0
 ; CHECK-AIX-NEXT:    fmr 31, 1
-; CHECK-AIX-NEXT:    lfs 0, 0(3)
-; CHECK-AIX-NEXT:    fmr 1, 0
+; CHECK-AIX-NEXT:    lfs 1, 0(3)
 ; CHECK-AIX-NEXT:    bl .__xl_log_finite[PR]
 ; CHECK-AIX-NEXT:    nop
 ; CHECK-AIX-NEXT:    fmul 0, 31, 1
diff --git a/llvm/test/CodeGen/PowerPC/machine-backward-cp.mir b/llvm/test/CodeGen/PowerPC/machine-backward-cp.mir
index 3146683768a474..a1f9180358b147 100644
--- a/llvm/test/CodeGen/PowerPC/machine-backward-cp.mir
+++ b/llvm/test/CodeGen/PowerPC/machine-backward-cp.mir
@@ -12,7 +12,7 @@ body: |
   bb.0.entry:
     ; CHECK-LABEL: name: test0
     ; CHECK: $x3 = LI8 1024
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x4 = LI8 1024
     $x3 = COPY renamable killed $x4
     BLR8 implicit $lr8, implicit undef $rm, implicit $x3
@@ -27,12 +27,14 @@ tracksRegLiveness: true
 body: |
   ; CHECK-LABEL: name: test1
   ; CHECK: bb.0.entry:
-  ; CHECK:   renamable $x4 = LI8 42
-  ; CHECK:   B %bb.1
-  ; CHECK: bb.1:
-  ; CHECK:   liveins: $x4
-  ; CHECK:   $x3 = COPY killed renamable $x4
-  ; CHECK:   BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+  ; CHECK-NEXT:   renamable $x4 = LI8 42
+  ; CHECK-NEXT:   B %bb.1
+  ; CHECK-NEXT: {{  $}}
+  ; CHECK-NEXT: bb.1:
+  ; CHECK-NEXT:   liveins: $x4
+  ; CHECK-NEXT: {{  $}}
+  ; CHECK-NEXT:   $x3 = COPY killed renamable $x4
+  ; CHECK-NEXT:   BLR8 implicit $lr8, implicit undef $rm, implicit $x3
   bb.0.entry:
     successors: %bb.1
 
@@ -56,8 +58,8 @@ body: |
   bb.0.entry:
     ; CHECK-LABEL: name: test2
     ; CHECK: renamable $x4 = LI8 1024
-    ; CHECK: $x13 = COPY killed renamable $x4
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit undef $x3
+    ; CHECK-NEXT: $x13 = COPY killed renamable $x4
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit undef $x3
     renamable $x4 = LI8 1024
     $x13 = COPY renamable killed $x4
     BLR8 implicit $lr8, implicit undef $rm, implicit undef $x3
@@ -73,9 +75,9 @@ body: |
   bb.0.entry:
     ; CHECK-LABEL: name: test3
     ; CHECK: renamable $x4 = LI8 0
-    ; CHECK: renamable $x5 = ADDI8 $x4, 1
-    ; CHECK: $x3 = COPY killed renamable $x4
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: renamable $x5 = ADDI8 $x4, 1
+    ; CHECK-NEXT: $x3 = COPY killed renamable $x4
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x4 = LI8 0
     renamable $x5 = ADDI8 $x4, 1
     $x3 = COPY renamable killed $x4
@@ -94,10 +96,10 @@ body: |
 
     ; CHECK-LABEL: name: test4
     ; CHECK: liveins: $x3
-    ; CHECK: renamable $x4 = LI8 0
-    ; CHECK: renamable $x5 = ADDI8 $x3, 1
-    ; CHECK: $x3 = COPY killed renamable $x4
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x5 = ADDI8 $x3, 1
+    ; CHECK-NEXT: $x3 = LI8 0
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x4 = LI8 0
     renamable $x5 = ADDI8 $x3, 1
     $x3 = COPY renamable killed $x4
@@ -116,10 +118,10 @@ body: |
 
     ; CHECK-LABEL: name: test5
     ; CHECK: liveins: $x3, $x5
-    ; CHECK: renamable $x4 = LI8 0
-    ; CHECK: renamable $x3 = ADDI8 $x5, 1
-    ; CHECK: $x3 = COPY killed renamable $x4
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x3 = ADDI8 $x5, 1
+    ; CHECK-NEXT: $x3 = LI8 0
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x4 = LI8 0
     renamable $x3 = ADDI8 $x5, 1
     $x3 = COPY renamable killed $x4
@@ -137,9 +139,10 @@ body: |
 
     ; CHECK-LABEL: name: iterative_deletion
     ; CHECK: liveins: $x5
-    ; CHECK: renamable $x4 = ADDI8 killed renamable $x5, 1
-    ; CHECK: $x3 = COPY $x4
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x4 = ADDI8 killed renamable $x5, 1
+    ; CHECK-NEXT: $x3 = COPY $x4
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x6 = ADDI8 renamable killed $x5, 1
     renamable $x4 = COPY renamable killed $x6
     renamable $x7 = COPY renamable killed $x4
@@ -157,10 +160,11 @@ body: |
     liveins: $x4, $x7
     ; CHECK-LABEL: name: Enter
     ; CHECK: liveins: $x4, $x7
-    ; CHECK: renamable $x5 = COPY killed renamable $x7
-    ; CHECK: renamable $x7 = ADDI8 killed renamable $x4, 1
-    ; CHECK: $x3 = ADD8 killed renamable $x5, killed renamable $x7
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x5 = COPY killed renamable $x7
+    ; CHECK-NEXT: renamable $x7 = ADDI8 killed renamable $x4, 1
+    ; CHECK-NEXT: $x3 = ADD8 killed renamable $x5, killed renamable $x7
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x5 = COPY killed renamable $x7
     renamable $x6 = ADDI8 killed renamable $x4, 1
     renamable $x7 = COPY killed renamable $x6
@@ -178,12 +182,13 @@ body: |
     liveins: $x4, $x7
     ; CHECK-LABEL: name: foo
     ; CHECK: liveins: $x4, $x7
-    ; CHECK: renamable $x5 = COPY killed renamable $x7
-    ; CHECK: renamable $x7 = ADDI8 renamable $x4, 1
-    ; CHECK: renamable $x6 = ADDI8 killed $x4, 2
-    ; CHECK: $x3 = ADD8 killed renamable $x5, killed renamable $x6
-    ; CHECK: $x3 = ADD8 $x3, killed renamable $x7
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x5 = COPY killed renamable $x7
+    ; CHECK-NEXT: renamable $x7 = ADDI8 renamable $x4, 1
+    ; CHECK-NEXT: renamable $x6 = ADDI8 killed $x4, 2
+    ; CHECK-NEXT: $x3 = ADD8 killed renamable $x5, killed renamable $x6
+    ; CHECK-NEXT: $x3 = ADD8 $x3, killed renamable $x7
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x5 = COPY killed renamable $x7
     renamable $x6 = ADDI8 renamable $x4, 1
     renamable $x7 = COPY killed renamable $x6
@@ -204,13 +209,14 @@ body: |
     liveins: $x4, $x7
     ; CHECK-LABEL: name: bar
     ; CHECK: liveins: $x4, $x7
-    ; CHECK: renamable $x5 = COPY killed renamable $x7
-    ; CHECK: renamable $x7 = ADDI8 renamable $x4, 1
-    ; CHECK: renamable $x8 = COPY killed renamable $x7
-    ; CHECK: renamable $x7 = ADDI8 renamable $x5, 2
-    ; CHECK: $x3 = ADD8 killed renamable $x5, killed renamable $x7
-    ; CHECK: $x3 = ADD8 $x3, killed renamable $x8
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x5 = COPY killed renamable $x7
+    ; CHECK-NEXT: renamable $x7 = ADDI8 renamable $x4, 1
+    ; CHECK-NEXT: renamable $x8 = COPY killed renamable $x7
+    ; CHECK-NEXT: renamable $x7 = ADDI8 renamable $x5, 2
+    ; CHECK-NEXT: $x3 = ADD8 killed renamable $x5, killed renamable $x7
+    ; CHECK-NEXT: $x3 = ADD8 $x3, killed renamable $x8
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x5 = COPY killed renamable $x7
     renamable $x6 = ADDI8 renamable $x4, 1
     renamable $x7 = COPY killed renamable $x6
@@ -232,12 +238,13 @@ body: |
     liveins: $x7
     ; CHECK-LABEL: name: bogus
     ; CHECK: liveins: $x7
-    ; CHECK: renamable $x5 = COPY renamable $x7
-    ; CHECK: renamable $x4 = ADDI8 $x7, 1
-    ; CHECK: renamable $x6 = ADDI8 renamable $x5, 2
-    ; CHECK: $x3 = ADD8 killed renamable $x4, killed renamable $x5
-    ; CHECK: $x3 = ADD8 $x3, killed renamable $x6
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x5 = COPY renamable $x7
+    ; CHECK-NEXT: renamable $x4 = ADDI8 $x7, 1
+    ; CHECK-NEXT: renamable $x6 = ADDI8 renamable $x5, 2
+    ; CHECK-NEXT: $x3 = ADD8 killed renamable $x4, killed renamable $x5
+    ; CHECK-NEXT: $x3 = ADD8 $x3, killed renamable $x6
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x5 = COPY killed renamable $x7
     renamable $x6 = ADDI8 renamable $x5, 1
     renamable $x4 = COPY killed renamable $x6
@@ -259,12 +266,13 @@ body: |
     liveins: $x7
     ; CHECK-LABEL: name: foobar
     ; CHECK: liveins: $x7
-    ; CHECK: renamable $x4 = ADDI8 $x7, 1
-    ; CHECK: renamable $x8 = COPY killed renamable $x4
-    ; CHECK: renamable $x4 = ADDI8 $x7, 2
-    ; CHECK: $x3 = ADD8 killed renamable $x4, $x7
-    ; CHECK: $x3 = ADD8 $x3, killed renamable $x8
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x4 = ADDI8 $x7, 1
+    ; CHECK-NEXT: renamable $x8 = COPY killed renamable $x4
+    ; CHECK-NEXT: renamable $x4 = ADDI8 $x7, 2
+    ; CHECK-NEXT: $x3 = ADD8 killed renamable $x4, $x7
+    ; CHECK-NEXT: $x3 = ADD8 $x3, killed renamable $x8
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x5 = COPY killed renamable $x7
     renamable $x6 = ADDI8 renamable $x5, 1
     renamable $x4 = COPY killed renamable $x6
@@ -286,10 +294,11 @@ body: |
     liveins: $x2, $x3, $x20
     ; CHECK-LABEL: name: cross_call
     ; CHECK: liveins: $x2, $x3, $x20
-    ; CHECK: renamable $x20 = LI8 1024
-    ; CHECK: BL8_NOP @foo, csr_ppc64_altivec, implicit-def $lr8, implicit $rm, implicit $x3, implicit-def $x3, implicit $x2
-    ; CHECK: $x3 = COPY killed renamable $x20
-    ; CHECK: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: renamable $x20 = LI8 1024
+    ; CHECK-NEXT: BL8_NOP @foo, csr_ppc64_altivec, implicit-def $lr8, implicit $rm, implicit $x3, implicit-def $x3, implicit $x2
+    ; CHECK-NEXT: $x3 = COPY killed renamable $x20
+    ; CHECK-NEXT: BLR8 implicit $lr8, implicit undef $rm, implicit $x3
     renamable $x20 = LI8 1024
     BL8_NOP @foo, csr_ppc64_altivec, implicit-def $lr8, implicit $rm, implicit $x3, implicit-def $x3, implicit $x2
     $x3 = COPY renamable killed $x20
diff --git a/llvm/test/CodeGen/PowerPC/stack-restore-with-setjmp.ll b/llvm/test/CodeGen/PowerPC/stack-restore-with-setjmp.ll
index 8748767501bd00..19ac04710614e8 100644
--- a/llvm/test/CodeGen/PowerPC/stack-restore-with-setjmp.ll
+++ b/llvm/test/CodeGen/PowerPC/stack-restore-with-setjmp.ll
@@ -18,10 +18,9 @@ define dso_local signext i32 @main(i32 signext %argc, ptr nocapture readnone %ar
 ; CHECK-NEXT:    stdu 1, -784(1)
 ; CHECK-NEXT:    # kill: def $r3 killed $r3 killed $x3
 ; CHECK-NEXT:    cmpwi 2, 3, 2
-; CHECK-NEXT:    li 4, 0
+; CHECK-NEXT:    li 3, 0
 ; CHECK-NEXT:    std 0, 800(1)
 ; CHECK-NEXT:    mr 31, 1
-; CHECK-NEXT:    mr 3, 4
 ; CHECK-NEXT:    blt 2, .LBB0_3
 ; CHECK-NEXT:  # %bb.1: # %if.end
 ; CHECK-NEXT:    addi 3, 31, 112
diff --git a/llvm/test/CodeGen/RISCV/alu64.ll b/llvm/test/CodeGen/RISCV/alu64.ll
index f032756e007b68..7d02dce8c48d98 100644
--- a/llvm/test/CodeGen/RISCV/alu64.ll
+++ b/llvm/test/CodeGen/RISCV/alu64.ll
@@ -326,9 +326,8 @@ define i64 @sra(i64 %a, i64 %b) nounwind {
 ; RV32I-NEXT:    sra a1, a1, a2
 ; RV32I-NEXT:    bltz a4, .LBB16_2
 ; RV32I-NEXT:  # %bb.1:
-; RV32I-NEXT:    srai a3, a3, 31
 ; RV32I-NEXT:    mv a0, a1
-; RV32I-NEXT:    mv a1, a3
+; RV32I-NEXT:    srai a1, a3, 31
 ; RV32I-NEXT:    ret
 ; RV32I-NEXT:  .LBB16_2:
 ; RV32I-NEXT:    srl a0, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/condops.ll b/llvm/test/CodeGen/RISCV/condops.ll
index 101cb5aeeb0940..bea7f9d9b7de94 100644
--- a/llvm/test/CodeGen/RISCV/condops.ll
+++ b/llvm/test/CodeGen/RISCV/condops.ll
@@ -388,9 +388,8 @@ define i64 @sub1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
 ; RV32I-NEXT:    sltu a5, a1, a3
 ; RV32I-NEXT:    and a0, a0, a4
 ; RV32I-NEXT:    sub a2, a2, a0
-; RV32I-NEXT:    sub a2, a2, a5
 ; RV32I-NEXT:    sub a0, a1, a3
-; RV32I-NEXT:    mv a1, a2
+; RV32I-NEXT:    sub a1, a2, a5
 ; RV32I-NEXT:    ret
 ;
 ; RV64I-LABEL: sub1:
@@ -418,9 +417,8 @@ define i64 @sub1(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
 ; RV32ZICOND-NEXT:    sltu a5, a1, a3
 ; RV32ZICOND-NEXT:    czero.eqz a0, a4, a0
 ; RV32ZICOND-NEXT:    sub a2, a2, a0
-; RV32ZICOND-NEXT:    sub a2, a2, a5
 ; RV32ZICOND-NEXT:    sub a0, a1, a3
-; RV32ZICOND-NEXT:    mv a1, a2
+; RV32ZICOND-NEXT:    sub a1, a2, a5
 ; RV32ZICOND-NEXT:    ret
 ;
 ; RV64ZICOND-LABEL: sub1:
@@ -441,9 +439,8 @@ define i64 @sub2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
 ; RV32I-NEXT:    sltu a5, a1, a3
 ; RV32I-NEXT:    and a0, a0, a4
 ; RV32I-NEXT:    sub a2, a2, a0
-; RV32I-NEXT:    sub a2, a2, a5
 ; RV32I-NEXT:    sub a0, a1, a3
-; RV32I-NEXT:    mv a1, a2
+; RV32I-NEXT:    sub a1, a2, a5
 ; RV32I-NEXT:    ret
 ;
 ; RV64I-LABEL: sub2:
@@ -471,9 +468,8 @@ define i64 @sub2(i1 zeroext %rc, i64 %rs1, i64 %rs2) {
 ; RV32ZICOND-NEXT:    sltu a5, a1, a3
 ; RV32ZICOND-NEXT:    czero.nez a0, a4, a0
 ; RV32ZICOND-NEXT:    sub a2, a2, a0
-; RV32ZICOND-NEXT:    sub a2, a2, a5
 ; RV32ZICOND-NEXT:    sub a0, a1, a3
-; RV32ZICOND-NEXT:    mv a1, a2
+; RV32ZICOND-NEXT:    sub a1, a2, a5
 ; RV32ZICOND-NEXT:    ret
 ;
 ; RV64ZICOND-LABEL: sub2:
diff --git a/llvm/test/CodeGen/RISCV/double-fcmp-strict.ll b/llvm/test/CodeGen/RISCV/double-fcmp-strict.ll
index e864d8fb0eddd5..8f88fbf4c05867 100644
--- a/llvm/test/CodeGen/RISCV/double-fcmp-strict.ll
+++ b/llvm/test/CodeGen/RISCV/double-fcmp-strict.ll
@@ -288,9 +288,8 @@ define i32 @fcmp_one(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    csrr a4, fflags
 ; RV32IZFINXZDINX-NEXT:    flt.d a6, a2, a0
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
-; RV32IZFINXZDINX-NEXT:    or a4, a6, a5
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a2, a0
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    or a0, a6, a5
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_one:
@@ -302,9 +301,8 @@ define i32 @fcmp_one(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    csrr a2, fflags
 ; RV64IZFINXZDINX-NEXT:    flt.d a4, a1, a0
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
-; RV64IZFINXZDINX-NEXT:    or a2, a4, a3
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a1, a0
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    or a0, a4, a3
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_one:
@@ -438,9 +436,8 @@ define i32 @fcmp_ueq(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    flt.d a6, a2, a0
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
 ; RV32IZFINXZDINX-NEXT:    or a4, a6, a5
-; RV32IZFINXZDINX-NEXT:    xori a4, a4, 1
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a2, a0
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    xori a0, a4, 1
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_ueq:
@@ -453,9 +450,8 @@ define i32 @fcmp_ueq(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    flt.d a4, a1, a0
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
 ; RV64IZFINXZDINX-NEXT:    or a3, a4, a3
-; RV64IZFINXZDINX-NEXT:    xori a2, a3, 1
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a1, a0
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    xori a0, a3, 1
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ueq:
@@ -531,9 +527,8 @@ define i32 @fcmp_ugt(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    csrr a4, fflags
 ; RV32IZFINXZDINX-NEXT:    fle.d a5, a0, a2
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
-; RV32IZFINXZDINX-NEXT:    xori a4, a5, 1
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a0, a2
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    xori a0, a5, 1
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_ugt:
@@ -541,9 +536,8 @@ define i32 @fcmp_ugt(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    csrr a2, fflags
 ; RV64IZFINXZDINX-NEXT:    fle.d a3, a0, a1
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
-; RV64IZFINXZDINX-NEXT:    xori a2, a3, 1
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a0, a1
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    xori a0, a3, 1
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ugt:
@@ -585,9 +579,8 @@ define i32 @fcmp_uge(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    csrr a4, fflags
 ; RV32IZFINXZDINX-NEXT:    flt.d a5, a0, a2
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
-; RV32IZFINXZDINX-NEXT:    xori a4, a5, 1
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a0, a2
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    xori a0, a5, 1
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_uge:
@@ -595,9 +588,8 @@ define i32 @fcmp_uge(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    csrr a2, fflags
 ; RV64IZFINXZDINX-NEXT:    flt.d a3, a0, a1
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
-; RV64IZFINXZDINX-NEXT:    xori a2, a3, 1
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a0, a1
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    xori a0, a3, 1
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_uge:
@@ -641,9 +633,8 @@ define i32 @fcmp_ult(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    csrr a4, fflags
 ; RV32IZFINXZDINX-NEXT:    fle.d a5, a2, a0
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
-; RV32IZFINXZDINX-NEXT:    xori a4, a5, 1
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a2, a0
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    xori a0, a5, 1
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_ult:
@@ -651,9 +642,8 @@ define i32 @fcmp_ult(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    csrr a2, fflags
 ; RV64IZFINXZDINX-NEXT:    fle.d a3, a1, a0
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
-; RV64IZFINXZDINX-NEXT:    xori a2, a3, 1
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a1, a0
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    xori a0, a3, 1
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ult:
@@ -695,9 +685,8 @@ define i32 @fcmp_ule(double %a, double %b) nounwind strictfp {
 ; RV32IZFINXZDINX-NEXT:    csrr a4, fflags
 ; RV32IZFINXZDINX-NEXT:    flt.d a5, a2, a0
 ; RV32IZFINXZDINX-NEXT:    csrw fflags, a4
-; RV32IZFINXZDINX-NEXT:    xori a4, a5, 1
 ; RV32IZFINXZDINX-NEXT:    feq.d zero, a2, a0
-; RV32IZFINXZDINX-NEXT:    mv a0, a4
+; RV32IZFINXZDINX-NEXT:    xori a0, a5, 1
 ; RV32IZFINXZDINX-NEXT:    ret
 ;
 ; RV64IZFINXZDINX-LABEL: fcmp_ule:
@@ -705,9 +694,8 @@ define i32 @fcmp_ule(double %a, double %b) nounwind strictfp {
 ; RV64IZFINXZDINX-NEXT:    csrr a2, fflags
 ; RV64IZFINXZDINX-NEXT:    flt.d a3, a1, a0
 ; RV64IZFINXZDINX-NEXT:    csrw fflags, a2
-; RV64IZFINXZDINX-NEXT:    xori a2, a3, 1
 ; RV64IZFINXZDINX-NEXT:    feq.d zero, a1, a0
-; RV64IZFINXZDINX-NEXT:    mv a0, a2
+; RV64IZFINXZDINX-NEXT:    xori a0, a3, 1
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ule:
diff --git a/llvm/test/CodeGen/RISCV/float-fcmp-strict.ll b/llvm/test/CodeGen/RISCV/float-fcmp-strict.ll
index dae9f3e089cf4a..6c75edb479252c 100644
--- a/llvm/test/CodeGen/RISCV/float-fcmp-strict.ll
+++ b/llvm/test/CodeGen/RISCV/float-fcmp-strict.ll
@@ -247,9 +247,8 @@ define i32 @fcmp_one(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    csrr a2, fflags
 ; CHECKIZFINX-NEXT:    flt.s a4, a1, a0
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
-; CHECKIZFINX-NEXT:    or a2, a4, a3
 ; CHECKIZFINX-NEXT:    feq.s zero, a1, a0
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    or a0, a4, a3
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_one:
@@ -368,9 +367,8 @@ define i32 @fcmp_ueq(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    flt.s a4, a1, a0
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
 ; CHECKIZFINX-NEXT:    or a3, a4, a3
-; CHECKIZFINX-NEXT:    xori a2, a3, 1
 ; CHECKIZFINX-NEXT:    feq.s zero, a1, a0
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    xori a0, a3, 1
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ueq:
@@ -438,9 +436,8 @@ define i32 @fcmp_ugt(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    csrr a2, fflags
 ; CHECKIZFINX-NEXT:    fle.s a3, a0, a1
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
-; CHECKIZFINX-NEXT:    xori a2, a3, 1
 ; CHECKIZFINX-NEXT:    feq.s zero, a0, a1
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    xori a0, a3, 1
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ugt:
@@ -482,9 +479,8 @@ define i32 @fcmp_uge(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    csrr a2, fflags
 ; CHECKIZFINX-NEXT:    flt.s a3, a0, a1
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
-; CHECKIZFINX-NEXT:    xori a2, a3, 1
 ; CHECKIZFINX-NEXT:    feq.s zero, a0, a1
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    xori a0, a3, 1
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_uge:
@@ -528,9 +524,8 @@ define i32 @fcmp_ult(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    csrr a2, fflags
 ; CHECKIZFINX-NEXT:    fle.s a3, a1, a0
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
-; CHECKIZFINX-NEXT:    xori a2, a3, 1
 ; CHECKIZFINX-NEXT:    feq.s zero, a1, a0
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    xori a0, a3, 1
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ult:
@@ -572,9 +567,8 @@ define i32 @fcmp_ule(float %a, float %b) nounwind strictfp {
 ; CHECKIZFINX-NEXT:    csrr a2, fflags
 ; CHECKIZFINX-NEXT:    flt.s a3, a1, a0
 ; CHECKIZFINX-NEXT:    csrw fflags, a2
-; CHECKIZFINX-NEXT:    xori a2, a3, 1
 ; CHECKIZFINX-NEXT:    feq.s zero, a1, a0
-; CHECKIZFINX-NEXT:    mv a0, a2
+; CHECKIZFINX-NEXT:    xori a0, a3, 1
 ; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcmp_ule:
diff --git a/llvm/test/CodeGen/RISCV/half-fcmp-strict.ll b/llvm/test/CodeGen/RISCV/half-fcmp-strict.ll
index d96c39c504e1fd..f4f1279a8e18ed 100644
--- a/llvm/test/CodeGen/RISCV/half-fcmp-strict.ll
+++ b/llvm/test/CodeGen/RISCV/half-fcmp-strict.ll
@@ -235,9 +235,8 @@ define i32 @fcmp_one(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    csrr a2, fflags
 ; CHECKIZHINX-NEXT:    flt.h a4, a1, a0
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
-; CHECKIZHINX-NEXT:    or a2, a4, a3
 ; CHECKIZHINX-NEXT:    feq.h zero, a1, a0
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    or a0, a4, a3
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_one:
@@ -334,9 +333,8 @@ define i32 @fcmp_ueq(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    flt.h a4, a1, a0
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
 ; CHECKIZHINX-NEXT:    or a3, a4, a3
-; CHECKIZHINX-NEXT:    xori a2, a3, 1
 ; CHECKIZHINX-NEXT:    feq.h zero, a1, a0
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    xori a0, a3, 1
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_ueq:
@@ -388,9 +386,8 @@ define i32 @fcmp_ugt(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    csrr a2, fflags
 ; CHECKIZHINX-NEXT:    fle.h a3, a0, a1
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
-; CHECKIZHINX-NEXT:    xori a2, a3, 1
 ; CHECKIZHINX-NEXT:    feq.h zero, a0, a1
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    xori a0, a3, 1
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_ugt:
@@ -432,9 +429,8 @@ define i32 @fcmp_uge(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    csrr a2, fflags
 ; CHECKIZHINX-NEXT:    flt.h a3, a0, a1
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
-; CHECKIZHINX-NEXT:    xori a2, a3, 1
 ; CHECKIZHINX-NEXT:    feq.h zero, a0, a1
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    xori a0, a3, 1
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_uge:
@@ -476,9 +472,8 @@ define i32 @fcmp_ult(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    csrr a2, fflags
 ; CHECKIZHINX-NEXT:    fle.h a3, a1, a0
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
-; CHECKIZHINX-NEXT:    xori a2, a3, 1
 ; CHECKIZHINX-NEXT:    feq.h zero, a1, a0
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    xori a0, a3, 1
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_ult:
@@ -520,9 +515,8 @@ define i32 @fcmp_ule(half %a, half %b) nounwind strictfp {
 ; CHECKIZHINX-NEXT:    csrr a2, fflags
 ; CHECKIZHINX-NEXT:    flt.h a3, a1, a0
 ; CHECKIZHINX-NEXT:    csrw fflags, a2
-; CHECKIZHINX-NEXT:    xori a2, a3, 1
 ; CHECKIZHINX-NEXT:    feq.h zero, a1, a0
-; CHECKIZHINX-NEXT:    mv a0, a2
+; CHECKIZHINX-NEXT:    xori a0, a3, 1
 ; CHECKIZHINX-NEXT:    ret
 ;
 ; CHECKIZFHMIN-LABEL: fcmp_ule:
diff --git a/llvm/test/CodeGen/RISCV/llvm.frexp.ll b/llvm/test/CodeGen/RISCV/llvm.frexp.ll
index 30f9dd1e516585..eed4e030f6720f 100644
--- a/llvm/test/CodeGen/RISCV/llvm.frexp.ll
+++ b/llvm/test/CodeGen/RISCV/llvm.frexp.ll
@@ -741,10 +741,9 @@ define { <4 x float>, <4 x i32> } @test_frexp_v4f32_v4i32(<4 x float> %a) nounwi
 ; RV32I-NEXT:    lw s0, 12(a1)
 ; RV32I-NEXT:    lw s1, 8(a1)
 ; RV32I-NEXT:    lw s2, 4(a1)
-; RV32I-NEXT:    lw a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lw a0, 0(a1)
 ; RV32I-NEXT:    addi a1, sp, 8
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call frexpf
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    addi a1, sp, 12
@@ -791,10 +790,9 @@ define { <4 x float>, <4 x i32> } @test_frexp_v4f32_v4i32(<4 x float> %a) nounwi
 ; RV64I-NEXT:    lw s0, 24(a1)
 ; RV64I-NEXT:    lw s1, 16(a1)
 ; RV64I-NEXT:    lw s2, 8(a1)
-; RV64I-NEXT:    lw a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lw a0, 0(a1)
 ; RV64I-NEXT:    mv a1, sp
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call frexpf
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    addi a1, sp, 4
@@ -1009,10 +1007,9 @@ define <4 x float> @test_frexp_v4f32_v4i32_only_use_fract(<4 x float> %a) nounwi
 ; RV32I-NEXT:    lw s0, 12(a1)
 ; RV32I-NEXT:    lw s1, 8(a1)
 ; RV32I-NEXT:    lw s2, 4(a1)
-; RV32I-NEXT:    lw a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lw a0, 0(a1)
 ; RV32I-NEXT:    addi a1, sp, 8
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call frexpf
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    addi a1, sp, 12
@@ -1051,10 +1048,9 @@ define <4 x float> @test_frexp_v4f32_v4i32_only_use_fract(<4 x float> %a) nounwi
 ; RV64I-NEXT:    lw s0, 24(a1)
 ; RV64I-NEXT:    lw s1, 16(a1)
 ; RV64I-NEXT:    lw s2, 8(a1)
-; RV64I-NEXT:    lw a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lw a0, 0(a1)
 ; RV64I-NEXT:    mv a1, sp
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call frexpf
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    addi a1, sp, 4
@@ -1257,10 +1253,9 @@ define <4 x i32> @test_frexp_v4f32_v4i32_only_use_exp(<4 x float> %a) nounwind {
 ; RV32I-NEXT:    lw s0, 12(a1)
 ; RV32I-NEXT:    lw s1, 8(a1)
 ; RV32I-NEXT:    lw s2, 4(a1)
-; RV32I-NEXT:    lw a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lw a0, 0(a1)
 ; RV32I-NEXT:    addi a1, sp, 12
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call frexpf
 ; RV32I-NEXT:    addi a1, sp, 16
 ; RV32I-NEXT:    mv a0, s2
@@ -1298,10 +1293,9 @@ define <4 x i32> @test_frexp_v4f32_v4i32_only_use_exp(<4 x float> %a) nounwind {
 ; RV64I-NEXT:    lw s0, 24(a1)
 ; RV64I-NEXT:    lw s1, 16(a1)
 ; RV64I-NEXT:    lw s2, 8(a1)
-; RV64I-NEXT:    lw a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lw a0, 0(a1)
 ; RV64I-NEXT:    addi a1, sp, 8
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call frexpf
 ; RV64I-NEXT:    addi a1, sp, 12
 ; RV64I-NEXT:    mv a0, s2
diff --git a/llvm/test/CodeGen/RISCV/neg-abs.ll b/llvm/test/CodeGen/RISCV/neg-abs.ll
index 6f301882b452c0..48f725dba5dccb 100644
--- a/llvm/test/CodeGen/RISCV/neg-abs.ll
+++ b/llvm/test/CodeGen/RISCV/neg-abs.ll
@@ -211,10 +211,9 @@ define i64 @neg_abs64_multiuse(i64 %x, ptr %y) {
 ; RV32I-NEXT:    sw a0, 0(a2)
 ; RV32I-NEXT:    snez a3, a0
 ; RV32I-NEXT:    neg a4, a1
-; RV32I-NEXT:    sub a3, a4, a3
-; RV32I-NEXT:    neg a0, a0
 ; RV32I-NEXT:    sw a1, 4(a2)
-; RV32I-NEXT:    mv a1, a3
+; RV32I-NEXT:    sub a1, a4, a3
+; RV32I-NEXT:    neg a0, a0
 ; RV32I-NEXT:    ret
 ;
 ; RV32ZBB-LABEL: neg_abs64_multiuse:
@@ -229,10 +228,9 @@ define i64 @neg_abs64_multiuse(i64 %x, ptr %y) {
 ; RV32ZBB-NEXT:    sw a0, 0(a2)
 ; RV32ZBB-NEXT:    snez a3, a0
 ; RV32ZBB-NEXT:    neg a4, a1
-; RV32ZBB-NEXT:    sub a3, a4, a3
-; RV32ZBB-NEXT:    neg a0, a0
 ; RV32ZBB-NEXT:    sw a1, 4(a2)
-; RV32ZBB-NEXT:    mv a1, a3
+; RV32ZBB-NEXT:    sub a1, a4, a3
+; RV32ZBB-NEXT:    neg a0, a0
 ; RV32ZBB-NEXT:    ret
 ;
 ; RV64I-LABEL: neg_abs64_multiuse:
diff --git a/llvm/test/CodeGen/RISCV/rv64-statepoint-call-lowering.ll b/llvm/test/CodeGen/RISCV/rv64-statepoint-call-lowering.ll
index 2fa344d4d79a7f..15c1f4325f117e 100644
--- a/llvm/test/CodeGen/RISCV/rv64-statepoint-call-lowering.ll
+++ b/llvm/test/CodeGen/RISCV/rv64-statepoint-call-lowering.ll
@@ -186,9 +186,8 @@ define i1 @test_cross_bb(ptr addrspace(1) %a, i1 %external_cond) gc "statepoint-
 ; CHECK-NEXT:  .Ltmp8:
 ; CHECK-NEXT:    beqz s0, .LBB8_2
 ; CHECK-NEXT:  # %bb.1: # %left
-; CHECK-NEXT:    ld a1, 8(sp)
 ; CHECK-NEXT:    mv s0, a0
-; CHECK-NEXT:    mv a0, a1
+; CHECK-NEXT:    ld a0, 8(sp)
 ; CHECK-NEXT:    call consume
 ; CHECK-NEXT:    mv a0, s0
 ; CHECK-NEXT:    j .LBB8_3
diff --git a/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll b/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
index 7839b602706db1..3c0888fa170361 100644
--- a/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/constant-folding-crash.ll
@@ -23,9 +23,8 @@ define void @constant_folding_crash(ptr %v54, <4 x ptr> %lanes.a, <4 x ptr> %lan
 ; RV32-NEXT:    seqz a0, a0
 ; RV32-NEXT:    vsetivli zero, 4, e8, mf4, ta, ma
 ; RV32-NEXT:    vmv.v.x v10, a0
-; RV32-NEXT:    vmsne.vi v10, v10, 0
 ; RV32-NEXT:    vmv1r.v v11, v0
-; RV32-NEXT:    vmv1r.v v0, v10
+; RV32-NEXT:    vmsne.vi v0, v10, 0
 ; RV32-NEXT:    vsetvli zero, zero, e32, m1, ta, ma
 ; RV32-NEXT:    vmerge.vvm v8, v9, v8, v0
 ; RV32-NEXT:    vmv.x.s a0, v8
@@ -47,9 +46,8 @@ define void @constant_folding_crash(ptr %v54, <4 x ptr> %lanes.a, <4 x ptr> %lan
 ; RV64-NEXT:    seqz a0, a0
 ; RV64-NEXT:    vsetivli zero, 4, e8, mf4, ta, ma
 ; RV64-NEXT:    vmv.v.x v12, a0
-; RV64-NEXT:    vmsne.vi v12, v12, 0
 ; RV64-NEXT:    vmv1r.v v13, v0
-; RV64-NEXT:    vmv1r.v v0, v12
+; RV64-NEXT:    vmsne.vi v0, v12, 0
 ; RV64-NEXT:    vsetvli zero, zero, e64, m2, ta, ma
 ; RV64-NEXT:    vmerge.vvm v8, v10, v8, v0
 ; RV64-NEXT:    vmv.x.s a0, v8
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
index 2e5b67c93fce1a..aa0a2e4a52f317 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfeq.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfeq_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmfeq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfeq.vv v0, v8, v9
 ; CHECK-NEXT:    vmfeq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
index b5ca47707c8a82..1bc25a72c63762 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfge.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfge_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v9, v8
 ; CHECK-NEXT:    vmfle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
index 971249d38d1b26..09629b6aa7f05d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfgt.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfgt_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v9, v8
 ; CHECK-NEXT:    vmflt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
index f19a181a365afc..7b45abb5a42660 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfle.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfle_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmfle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfle.vv v0, v8, v9
 ; CHECK-NEXT:    vmfle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
index 0a046422193342..cea5a388c749b4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmflt.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmflt_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmflt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmflt.vv v0, v8, v9
 ; CHECK-NEXT:    vmflt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
index 520099247e0f3d..6e216757ae1b2b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmfne.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f16_nxv1f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f16_nxv2f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv4f16_nxv4f16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -295,9 +292,8 @@ define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f32_nxv1f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 2 x i1> @intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv2f32_nxv2f32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -503,9 +498,8 @@ define <vscale x 1 x i1> @intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmfne_mask_vv_nxv1f64_nxv1f64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmfne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmfne.vv v0, v8, v9
 ; CHECK-NEXT:    vmfne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
index 9f181f7a30ebed..b9fa97e199ea67 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmseq.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmseq_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmseq.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmseq.vv v0, v8, v9
 ; CHECK-NEXT:    vmseq.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
index 75fc407abbc2f3..0815fca81042ae 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsge.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsge_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v9, v8
 ; CHECK-NEXT:    vmsle.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
index 5568c1e9b1cfb9..07adbad865580a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgeu.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgeu_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsleu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
index f1fa6484d976b4..c43c1643478518 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgt.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgt_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v9, v8
 ; CHECK-NEXT:    vmslt.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
index de7a0ad87be27c..9cef421e43bf4d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsgtu.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsgtu_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v9, v8
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v9, v8
 ; CHECK-NEXT:    vmsltu.vv v11, v10, v9, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
index f54aef3ed4052c..2d0e2e52159b90 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsle.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsle_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsle.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsle.vv v0, v8, v9
 ; CHECK-NEXT:    vmsle.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
index 540577247484e3..7a7f7bf8dc3848 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsleu.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsleu_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsleu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsleu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsleu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
index 554d25172d4fde..ecdf673f45f4be 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmslt.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmslt_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmslt.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmslt.vv v0, v8, v9
 ; CHECK-NEXT:    vmslt.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
index 7a8efa6c80fb6b..e5c8800caa5f1d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsltu.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsltu_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsltu.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsltu.vv v0, v8, v9
 ; CHECK-NEXT:    vmsltu.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
index bd6bd8a804bcc2..b657247ee9249b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vmsne.ll
@@ -35,9 +35,8 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8(<vscale x 1 x i1
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i8_nxv1i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf8, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -87,9 +86,8 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8(<vscale x 2 x i1
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i8_nxv2i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf4, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -139,9 +137,8 @@ define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8(<vscale x 4 x i1
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i8_nxv4i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, mf2, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -191,9 +188,8 @@ define <vscale x 8 x i1> @intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8(<vscale x 8 x i1
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv8i8_nxv8i8:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -347,9 +343,8 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i16_nxv1i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf4, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -399,9 +394,8 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i16_nxv2i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, mf2, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -451,9 +445,8 @@ define <vscale x 4 x i1> @intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16(<vscale x 4 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv4i16_nxv4i16:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m1, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -607,9 +600,8 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i32_nxv1i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, mf2, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv1r.v v0, v11
 ; CHECK-NEXT:    ret
@@ -659,9 +651,8 @@ define <vscale x 2 x i1> @intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32(<vscale x 2 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv2i32_nxv2i32:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e32, m1, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
@@ -815,9 +806,8 @@ define <vscale x 1 x i1> @intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64(<vscale x 1 x
 ; CHECK-LABEL: intrinsic_vmsne_mask_vv_nxv1i64_nxv1i64:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, mu
-; CHECK-NEXT:    vmsne.vv v8, v8, v9
 ; CHECK-NEXT:    vmv1r.v v11, v0
-; CHECK-NEXT:    vmv.v.v v0, v8
+; CHECK-NEXT:    vmsne.vv v0, v8, v9
 ; CHECK-NEXT:    vmsne.vv v11, v9, v10, v0.t
 ; CHECK-NEXT:    vmv.v.v v0, v11
 ; CHECK-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-regression.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-regression.ll
index c3b19b59ec3d68..40c3401363dda2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-regression.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-regression.ll
@@ -12,9 +12,8 @@ define i32 @illegal_preserve_vl(<vscale x 2 x i32> %a, <vscale x 4 x i64> %x, pt
 ; CHECK-NEXT:    vsetvli a1, zero, e64, m4, ta, ma
 ; CHECK-NEXT:    vadd.vv v12, v12, v12
 ; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
-; CHECK-NEXT:    vmv.x.s a1, v8
 ; CHECK-NEXT:    vs4r.v v12, (a0)
-; CHECK-NEXT:    mv a0, a1
+; CHECK-NEXT:    vmv.x.s a0, v8
 ; CHECK-NEXT:    ret
   %index = add <vscale x 4 x i64> %x, %x
   store <vscale x 4 x i64> %index, ptr %y
diff --git a/llvm/test/CodeGen/RISCV/shifts.ll b/llvm/test/CodeGen/RISCV/shifts.ll
index f61cbfd3ed7257..6910f71e7271a4 100644
--- a/llvm/test/CodeGen/RISCV/shifts.ll
+++ b/llvm/test/CodeGen/RISCV/shifts.ll
@@ -64,9 +64,8 @@ define i64 @ashr64(i64 %a, i64 %b) nounwind {
 ; RV32I-NEXT:    sra a1, a1, a2
 ; RV32I-NEXT:    bltz a4, .LBB2_2
 ; RV32I-NEXT:  # %bb.1:
-; RV32I-NEXT:    srai a3, a3, 31
 ; RV32I-NEXT:    mv a0, a1
-; RV32I-NEXT:    mv a1, a3
+; RV32I-NEXT:    srai a1, a3, 31
 ; RV32I-NEXT:    ret
 ; RV32I-NEXT:  .LBB2_2:
 ; RV32I-NEXT:    srl a0, a0, a2
@@ -421,9 +420,8 @@ define i128 @ashr128(i128 %a, i128 %b) nounwind {
 ; RV64I-NEXT:    sra a1, a1, a2
 ; RV64I-NEXT:    bltz a4, .LBB7_2
 ; RV64I-NEXT:  # %bb.1:
-; RV64I-NEXT:    srai a3, a3, 63
 ; RV64I-NEXT:    mv a0, a1
-; RV64I-NEXT:    mv a1, a3
+; RV64I-NEXT:    srai a1, a3, 63
 ; RV64I-NEXT:    ret
 ; RV64I-NEXT:  .LBB7_2:
 ; RV64I-NEXT:    srl a0, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll b/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
index 7fc4713ac2d6e1..55ccb4d130c860 100644
--- a/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
@@ -21,10 +21,9 @@ define <4 x i16> @fold_srem_vec_1(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    lh s0, 12(a1)
 ; RV32I-NEXT:    lh s1, 8(a1)
 ; RV32I-NEXT:    lh s2, 4(a1)
-; RV32I-NEXT:    lh a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lh a0, 0(a1)
 ; RV32I-NEXT:    li a1, 95
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __modsi3
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    li a1, -124
@@ -113,10 +112,9 @@ define <4 x i16> @fold_srem_vec_1(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    lh s0, 24(a1)
 ; RV64I-NEXT:    lh s1, 16(a1)
 ; RV64I-NEXT:    lh s2, 8(a1)
-; RV64I-NEXT:    lh a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lh a0, 0(a1)
 ; RV64I-NEXT:    li a1, 95
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __moddi3
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    li a1, -124
@@ -209,10 +207,9 @@ define <4 x i16> @fold_srem_vec_2(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    lh s0, 12(a1)
 ; RV32I-NEXT:    lh s1, 8(a1)
 ; RV32I-NEXT:    lh s2, 4(a1)
-; RV32I-NEXT:    lh a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lh a0, 0(a1)
 ; RV32I-NEXT:    li a1, 95
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __modsi3
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    li a1, 95
@@ -294,10 +291,9 @@ define <4 x i16> @fold_srem_vec_2(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    lh s0, 24(a1)
 ; RV64I-NEXT:    lh s1, 16(a1)
 ; RV64I-NEXT:    lh s2, 8(a1)
-; RV64I-NEXT:    lh a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lh a0, 0(a1)
 ; RV64I-NEXT:    li a1, 95
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __moddi3
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    li a1, 95
@@ -775,10 +771,9 @@ define <4 x i16> @dont_fold_srem_one(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    sw s3, 12(sp) # 4-byte Folded Spill
 ; RV32I-NEXT:    lh s0, 12(a1)
 ; RV32I-NEXT:    lh s1, 8(a1)
-; RV32I-NEXT:    lh a2, 4(a1)
 ; RV32I-NEXT:    mv s2, a0
+; RV32I-NEXT:    lh a0, 4(a1)
 ; RV32I-NEXT:    li a1, 654
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __modsi3
 ; RV32I-NEXT:    mv s3, a0
 ; RV32I-NEXT:    li a1, 23
@@ -852,10 +847,9 @@ define <4 x i16> @dont_fold_srem_one(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    lh s0, 24(a1)
 ; RV64I-NEXT:    lh s1, 16(a1)
-; RV64I-NEXT:    lh a2, 8(a1)
 ; RV64I-NEXT:    mv s2, a0
+; RV64I-NEXT:    lh a0, 8(a1)
 ; RV64I-NEXT:    li a1, 654
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __moddi3
 ; RV64I-NEXT:    mv s3, a0
 ; RV64I-NEXT:    li a1, 23
@@ -1091,11 +1085,10 @@ define <4 x i64> @dont_fold_srem_i64(<4 x i64> %x) nounwind {
 ; RV32I-NEXT:    lw s3, 20(a1)
 ; RV32I-NEXT:    lw s4, 8(a1)
 ; RV32I-NEXT:    lw s5, 12(a1)
-; RV32I-NEXT:    lw a3, 0(a1)
-; RV32I-NEXT:    lw a1, 4(a1)
 ; RV32I-NEXT:    mv s6, a0
+; RV32I-NEXT:    lw a0, 0(a1)
+; RV32I-NEXT:    lw a1, 4(a1)
 ; RV32I-NEXT:    li a2, 1
-; RV32I-NEXT:    mv a0, a3
 ; RV32I-NEXT:    li a3, 0
 ; RV32I-NEXT:    call __moddi3
 ; RV32I-NEXT:    mv s7, a0
@@ -1160,11 +1153,10 @@ define <4 x i64> @dont_fold_srem_i64(<4 x i64> %x) nounwind {
 ; RV32IM-NEXT:    lw s3, 20(a1)
 ; RV32IM-NEXT:    lw s4, 8(a1)
 ; RV32IM-NEXT:    lw s5, 12(a1)
-; RV32IM-NEXT:    lw a3, 0(a1)
-; RV32IM-NEXT:    lw a1, 4(a1)
 ; RV32IM-NEXT:    mv s6, a0
+; RV32IM-NEXT:    lw a0, 0(a1)
+; RV32IM-NEXT:    lw a1, 4(a1)
 ; RV32IM-NEXT:    li a2, 1
-; RV32IM-NEXT:    mv a0, a3
 ; RV32IM-NEXT:    li a3, 0
 ; RV32IM-NEXT:    call __moddi3
 ; RV32IM-NEXT:    mv s7, a0
@@ -1220,10 +1212,9 @@ define <4 x i64> @dont_fold_srem_i64(<4 x i64> %x) nounwind {
 ; RV64I-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    ld s0, 24(a1)
 ; RV64I-NEXT:    ld s1, 16(a1)
-; RV64I-NEXT:    ld a2, 8(a1)
 ; RV64I-NEXT:    mv s2, a0
+; RV64I-NEXT:    ld a0, 8(a1)
 ; RV64I-NEXT:    li a1, 654
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __moddi3
 ; RV64I-NEXT:    mv s3, a0
 ; RV64I-NEXT:    li a1, 23
diff --git a/llvm/test/CodeGen/RISCV/tail-calls.ll b/llvm/test/CodeGen/RISCV/tail-calls.ll
index d3e495bb723ad8..818f803e2da18f 100644
--- a/llvm/test/CodeGen/RISCV/tail-calls.ll
+++ b/llvm/test/CodeGen/RISCV/tail-calls.ll
@@ -19,11 +19,10 @@ declare void @llvm.memcpy.p0.p0.i32(ptr, ptr, i32, i1)
 define void @caller_extern(ptr %src) optsize {
 ; CHECK-LABEL: caller_extern:
 ; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    lui a1, %hi(dest)
-; CHECK-NEXT:    addi a1, a1, %lo(dest)
-; CHECK-NEXT:    li a2, 7
 ; CHECK-NEXT:    mv a3, a0
-; CHECK-NEXT:    mv a0, a1
+; CHECK-NEXT:    lui a0, %hi(dest)
+; CHECK-NEXT:    addi a0, a0, %lo(dest)
+; CHECK-NEXT:    li a2, 7
 ; CHECK-NEXT:    mv a1, a3
 ; CHECK-NEXT:    tail memcpy
 entry:
@@ -36,11 +35,10 @@ entry:
 define void @caller_extern_pgso(ptr %src) !prof !14 {
 ; CHECK-LABEL: caller_extern_pgso:
 ; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    lui a1, %hi(dest_pgso)
-; CHECK-NEXT:    addi a1, a1, %lo(dest_pgso)
-; CHECK-NEXT:    li a2, 7
 ; CHECK-NEXT:    mv a3, a0
-; CHECK-NEXT:    mv a0, a1
+; CHECK-NEXT:    lui a0, %hi(dest_pgso)
+; CHECK-NEXT:    addi a0, a0, %lo(dest_pgso)
+; CHECK-NEXT:    li a2, 7
 ; CHECK-NEXT:    mv a1, a3
 ; CHECK-NEXT:    tail memcpy
 entry:
diff --git a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
index 540883fdc517a5..f829eb7b30917d 100644
--- a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
@@ -22,10 +22,9 @@ define <4 x i16> @fold_urem_vec_1(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    lhu s0, 12(a1)
 ; RV32I-NEXT:    lhu s1, 8(a1)
 ; RV32I-NEXT:    lhu s2, 4(a1)
-; RV32I-NEXT:    lhu a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lhu a0, 0(a1)
 ; RV32I-NEXT:    li a1, 95
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __umodsi3
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    li a1, 124
@@ -100,10 +99,9 @@ define <4 x i16> @fold_urem_vec_1(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    lhu s0, 24(a1)
 ; RV64I-NEXT:    lhu s1, 16(a1)
 ; RV64I-NEXT:    lhu s2, 8(a1)
-; RV64I-NEXT:    lhu a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lhu a0, 0(a1)
 ; RV64I-NEXT:    li a1, 95
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __umoddi3
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    li a1, 124
@@ -182,10 +180,9 @@ define <4 x i16> @fold_urem_vec_2(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    lhu s0, 12(a1)
 ; RV32I-NEXT:    lhu s1, 8(a1)
 ; RV32I-NEXT:    lhu s2, 4(a1)
-; RV32I-NEXT:    lhu a2, 0(a1)
 ; RV32I-NEXT:    mv s3, a0
+; RV32I-NEXT:    lhu a0, 0(a1)
 ; RV32I-NEXT:    li a1, 95
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __umodsi3
 ; RV32I-NEXT:    mv s4, a0
 ; RV32I-NEXT:    li a1, 95
@@ -251,10 +248,9 @@ define <4 x i16> @fold_urem_vec_2(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    lhu s0, 24(a1)
 ; RV64I-NEXT:    lhu s1, 16(a1)
 ; RV64I-NEXT:    lhu s2, 8(a1)
-; RV64I-NEXT:    lhu a2, 0(a1)
 ; RV64I-NEXT:    mv s3, a0
+; RV64I-NEXT:    lhu a0, 0(a1)
 ; RV64I-NEXT:    li a1, 95
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __umoddi3
 ; RV64I-NEXT:    mv s4, a0
 ; RV64I-NEXT:    li a1, 95
@@ -534,10 +530,9 @@ define <4 x i16> @dont_fold_urem_power_of_two(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    lhu s1, 8(a1)
 ; RV32I-NEXT:    lhu s2, 4(a1)
 ; RV32I-NEXT:    lhu s3, 0(a1)
-; RV32I-NEXT:    lhu a2, 12(a1)
 ; RV32I-NEXT:    mv s0, a0
+; RV32I-NEXT:    lhu a0, 12(a1)
 ; RV32I-NEXT:    li a1, 95
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __umodsi3
 ; RV32I-NEXT:    andi a1, s3, 63
 ; RV32I-NEXT:    andi a2, s2, 31
@@ -586,10 +581,9 @@ define <4 x i16> @dont_fold_urem_power_of_two(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    lhu s1, 16(a1)
 ; RV64I-NEXT:    lhu s2, 8(a1)
 ; RV64I-NEXT:    lhu s3, 0(a1)
-; RV64I-NEXT:    lhu a2, 24(a1)
 ; RV64I-NEXT:    mv s0, a0
+; RV64I-NEXT:    lhu a0, 24(a1)
 ; RV64I-NEXT:    li a1, 95
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __umoddi3
 ; RV64I-NEXT:    andi a1, s3, 63
 ; RV64I-NEXT:    andi a2, s2, 31
@@ -642,10 +636,9 @@ define <4 x i16> @dont_fold_urem_one(<4 x i16> %x) nounwind {
 ; RV32I-NEXT:    sw s3, 12(sp) # 4-byte Folded Spill
 ; RV32I-NEXT:    lhu s0, 12(a1)
 ; RV32I-NEXT:    lhu s1, 8(a1)
-; RV32I-NEXT:    lhu a2, 4(a1)
 ; RV32I-NEXT:    mv s2, a0
+; RV32I-NEXT:    lhu a0, 4(a1)
 ; RV32I-NEXT:    li a1, 654
-; RV32I-NEXT:    mv a0, a2
 ; RV32I-NEXT:    call __umodsi3
 ; RV32I-NEXT:    mv s3, a0
 ; RV32I-NEXT:    li a1, 23
@@ -708,10 +701,9 @@ define <4 x i16> @dont_fold_urem_one(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    lhu s0, 24(a1)
 ; RV64I-NEXT:    lhu s1, 16(a1)
-; RV64I-NEXT:    lhu a2, 8(a1)
 ; RV64I-NEXT:    mv s2, a0
+; RV64I-NEXT:    lhu a0, 8(a1)
 ; RV64I-NEXT:    li a1, 654
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __umoddi3
 ; RV64I-NEXT:    mv s3, a0
 ; RV64I-NEXT:    li a1, 23
@@ -797,11 +789,10 @@ define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) nounwind {
 ; RV32I-NEXT:    lw s3, 20(a1)
 ; RV32I-NEXT:    lw s4, 8(a1)
 ; RV32I-NEXT:    lw s5, 12(a1)
-; RV32I-NEXT:    lw a3, 0(a1)
-; RV32I-NEXT:    lw a1, 4(a1)
 ; RV32I-NEXT:    mv s6, a0
+; RV32I-NEXT:    lw a0, 0(a1)
+; RV32I-NEXT:    lw a1, 4(a1)
 ; RV32I-NEXT:    li a2, 1
-; RV32I-NEXT:    mv a0, a3
 ; RV32I-NEXT:    li a3, 0
 ; RV32I-NEXT:    call __umoddi3
 ; RV32I-NEXT:    mv s7, a0
@@ -866,11 +857,10 @@ define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) nounwind {
 ; RV32IM-NEXT:    lw s3, 20(a1)
 ; RV32IM-NEXT:    lw s4, 8(a1)
 ; RV32IM-NEXT:    lw s5, 12(a1)
-; RV32IM-NEXT:    lw a3, 0(a1)
-; RV32IM-NEXT:    lw a1, 4(a1)
 ; RV32IM-NEXT:    mv s6, a0
+; RV32IM-NEXT:    lw a0, 0(a1)
+; RV32IM-NEXT:    lw a1, 4(a1)
 ; RV32IM-NEXT:    li a2, 1
-; RV32IM-NEXT:    mv a0, a3
 ; RV32IM-NEXT:    li a3, 0
 ; RV32IM-NEXT:    call __umoddi3
 ; RV32IM-NEXT:    mv s7, a0
@@ -926,10 +916,9 @@ define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) nounwind {
 ; RV64I-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    ld s0, 24(a1)
 ; RV64I-NEXT:    ld s1, 16(a1)
-; RV64I-NEXT:    ld a2, 8(a1)
 ; RV64I-NEXT:    mv s2, a0
+; RV64I-NEXT:    ld a0, 8(a1)
 ; RV64I-NEXT:    li a1, 654
-; RV64I-NEXT:    mv a0, a2
 ; RV64I-NEXT:    call __umoddi3
 ; RV64I-NEXT:    mv s3, a0
 ; RV64I-NEXT:    li a1, 23
diff --git a/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll b/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
index b0d435368e92bd..8d6a5cd1c38b4c 100644
--- a/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
+++ b/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
@@ -560,9 +560,8 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV32I-NEXT:    sra a1, a3, a5
 ; RV32I-NEXT:    bltz a6, .LBB5_2
 ; RV32I-NEXT:  # %bb.1:
-; RV32I-NEXT:    srai a4, a4, 31
 ; RV32I-NEXT:    mv a0, a1
-; RV32I-NEXT:    mv a1, a4
+; RV32I-NEXT:    srai a1, a4, 31
 ; RV32I-NEXT:    j .LBB5_3
 ; RV32I-NEXT:  .LBB5_2:
 ; RV32I-NEXT:    lbu a4, 1(a0)
@@ -1094,9 +1093,8 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    sra a1, a3, a5
 ; RV64I-NEXT:    bltz a6, .LBB8_2
 ; RV64I-NEXT:  # %bb.1:
-; RV64I-NEXT:    sraiw a3, a4, 31
 ; RV64I-NEXT:    mv a0, a1
-; RV64I-NEXT:    mv a1, a3
+; RV64I-NEXT:    sraiw a1, a4, 31
 ; RV64I-NEXT:    j .LBB8_3
 ; RV64I-NEXT:  .LBB8_2:
 ; RV64I-NEXT:    lbu a4, 1(a0)
diff --git a/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll b/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
index a601256bc2afaa..4c2ab1230ee6e0 100644
--- a/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
+++ b/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
@@ -543,9 +543,8 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV32I-NEXT:    sra a1, a3, a5
 ; RV32I-NEXT:    bltz a6, .LBB5_2
 ; RV32I-NEXT:  # %bb.1:
-; RV32I-NEXT:    srai a4, a4, 31
 ; RV32I-NEXT:    mv a0, a1
-; RV32I-NEXT:    mv a1, a4
+; RV32I-NEXT:    srai a1, a4, 31
 ; RV32I-NEXT:    j .LBB5_3
 ; RV32I-NEXT:  .LBB5_2:
 ; RV32I-NEXT:    lbu a4, 1(a0)
@@ -1203,9 +1202,8 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    sra a1, a3, a5
 ; RV64I-NEXT:    bltz a6, .LBB8_2
 ; RV64I-NEXT:  # %bb.1:
-; RV64I-NEXT:    sraiw a3, a4, 31
 ; RV64I-NEXT:    mv a0, a1
-; RV64I-NEXT:    mv a1, a3
+; RV64I-NEXT:    sraiw a1, a4, 31
 ; RV64I-NEXT:    j .LBB8_3
 ; RV64I-NEXT:  .LBB8_2:
 ; RV64I-NEXT:    lbu a4, 1(a0)
diff --git a/llvm/test/CodeGen/SPARC/tailcall.ll b/llvm/test/CodeGen/SPARC/tailcall.ll
index 45612c51ee1338..6b1eb88bb92035 100644
--- a/llvm/test/CodeGen/SPARC/tailcall.ll
+++ b/llvm/test/CodeGen/SPARC/tailcall.ll
@@ -78,10 +78,9 @@ define void @caller_extern(ptr %src) optsize #0 {
 ; V8-LABEL: caller_extern:
 ; V8:       ! %bb.0: ! %entry
 ; V8-NEXT:    sethi %hi(dest), %o1
-; V8-NEXT:    add %o1, %lo(dest), %o1
-; V8-NEXT:    mov 7, %o2
 ; V8-NEXT:    mov %o0, %o3
-; V8-NEXT:    mov %o1, %o0
+; V8-NEXT:    add %o1, %lo(dest), %o0
+; V8-NEXT:    mov 7, %o2
 ; V8-NEXT:    mov %o3, %o1
 ; V8-NEXT:    mov %o7, %g1
 ; V8-NEXT:    call memcpy
@@ -92,10 +91,9 @@ define void @caller_extern(ptr %src) optsize #0 {
 ; V9-NEXT:    sethi %h44(dest), %o1
 ; V9-NEXT:    add %o1, %m44(dest), %o1
 ; V9-NEXT:    sllx %o1, 12, %o1
-; V9-NEXT:    add %o1, %l44(dest), %o1
-; V9-NEXT:    mov 7, %o2
 ; V9-NEXT:    mov %o0, %o3
-; V9-NEXT:    mov %o1, %o0
+; V9-NEXT:    add %o1, %l44(dest), %o0
+; V9-NEXT:    mov 7, %o2
 ; V9-NEXT:    mov %o3, %o1
 ; V9-NEXT:    mov %o7, %g1
 ; V9-NEXT:    call memcpy
diff --git a/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll b/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
index 4a109ee96a3d3e..2501eb552b149b 100644
--- a/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
+++ b/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
@@ -237,9 +237,8 @@ define <2 x double> @constrained_vector_frem_v2f64() #0 {
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI6_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    ldr %f2, %f9
@@ -303,15 +302,13 @@ define <3 x float> @constrained_vector_frem_v3f32() #0 {
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmodf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI7_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmodf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI7_3
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f10, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmodf@PLT
 ; S390X-NEXT:    ler %f2, %f10
@@ -388,15 +385,13 @@ define void @constrained_vector_frem_v3f64(ptr %a) #0 {
 ; S390X-NEXT:    ld %f9, 8(%r2)
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI8_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f9
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI8_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    std %f0, 0(%r13)
@@ -480,21 +475,18 @@ define <4 x double> @constrained_vector_frem_v4f64() #0 {
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI9_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI9_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    larl %r1, .LCPI9_4
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f11, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmod@PLT
 ; S390X-NEXT:    ldr %f2, %f11
@@ -1263,9 +1255,8 @@ define <2 x double> @constrained_vector_pow_v2f64() #0 {
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    larl %r1, .LCPI31_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    ldr %f2, %f9
@@ -1331,15 +1322,13 @@ define <3 x float> @constrained_vector_pow_v3f32() #0 {
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, powf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI32_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, powf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI32_3
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f10, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, powf@PLT
 ; S390X-NEXT:    ler %f2, %f10
@@ -1514,21 +1503,18 @@ define <4 x double> @constrained_vector_pow_v4f64() #0 {
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    larl %r1, .LCPI34_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    larl %r1, .LCPI34_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    larl %r1, .LCPI34_4
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f11, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    brasl %r14, pow@PLT
 ; S390X-NEXT:    ldr %f2, %f11
@@ -1648,10 +1634,9 @@ define <2 x double> @constrained_vector_powi_v2f64() #0 {
 ; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI36_1
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -1706,16 +1691,14 @@ define <3 x float> @constrained_vector_powi_v3f32() #0 {
 ; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powisf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI37_1
-; S390X-NEXT:    le %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powisf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI37_2
-; S390X-NEXT:    le %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powisf2@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -1783,16 +1766,14 @@ define void @constrained_vector_powi_v3f64(ptr %a) #0 {
 ; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI38_1
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI38_2
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    std %f0, 16(%r13)
 ; S390X-NEXT:    std %f9, 8(%r13)
@@ -1864,22 +1845,19 @@ define <4 x double> @constrained_vector_powi_v4f64() #0 {
 ; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI39_1
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI39_2
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI39_3
-; S390X-NEXT:    ld %f1, 0(%r1)
-; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
+; S390X-NEXT:    lghi %r2, 3
 ; S390X-NEXT:    brasl %r14, __powidf2@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -1987,9 +1965,8 @@ define <2 x double> @constrained_vector_sin_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    larl %r1, .LCPI41_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -2040,14 +2017,12 @@ define <3 x float> @constrained_vector_sin_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sinf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI42_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sinf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI42_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sinf@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -2189,19 +2164,16 @@ define <4 x double> @constrained_vector_sin_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    larl %r1, .LCPI44_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    larl %r1, .LCPI44_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    larl %r1, .LCPI44_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, sin@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -2304,9 +2276,8 @@ define <2 x double> @constrained_vector_cos_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    larl %r1, .LCPI46_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -2357,14 +2328,12 @@ define <3 x float> @constrained_vector_cos_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cosf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI47_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cosf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI47_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cosf@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -2506,19 +2475,16 @@ define <4 x double> @constrained_vector_cos_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    larl %r1, .LCPI49_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    larl %r1, .LCPI49_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    larl %r1, .LCPI49_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, cos@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -2621,9 +2587,8 @@ define <2 x double> @constrained_vector_exp_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    larl %r1, .LCPI51_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -2674,14 +2639,12 @@ define <3 x float> @constrained_vector_exp_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, expf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI52_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, expf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI52_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, expf@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -2823,19 +2786,16 @@ define <4 x double> @constrained_vector_exp_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    larl %r1, .LCPI54_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    larl %r1, .LCPI54_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    larl %r1, .LCPI54_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -2938,9 +2898,8 @@ define <2 x double> @constrained_vector_exp2_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI56_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -2991,14 +2950,12 @@ define <3 x float> @constrained_vector_exp2_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2f@PLT
 ; S390X-NEXT:    larl %r1, .LCPI57_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2f@PLT
 ; S390X-NEXT:    larl %r1, .LCPI57_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2f@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -3140,19 +3097,16 @@ define <4 x double> @constrained_vector_exp2_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI59_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI59_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    larl %r1, .LCPI59_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, exp2@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -3255,9 +3209,8 @@ define <2 x double> @constrained_vector_log_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    larl %r1, .LCPI61_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -3308,14 +3261,12 @@ define <3 x float> @constrained_vector_log_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, logf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI62_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, logf@PLT
 ; S390X-NEXT:    larl %r1, .LCPI62_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, logf@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -3457,19 +3408,16 @@ define <4 x double> @constrained_vector_log_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    larl %r1, .LCPI64_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    larl %r1, .LCPI64_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    larl %r1, .LCPI64_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log@PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -3572,9 +3520,8 @@ define <2 x double> @constrained_vector_log10_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10@PLT
 ; S390X-NEXT:    larl %r1, .LCPI66_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10@PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -3625,14 +3572,12 @@ define <3 x float> @constrained_vector_log10_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10f@PLT
 ; S390X-NEXT:    larl %r1, .LCPI67_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10f@PLT
 ; S390X-NEXT:    larl %r1, .LCPI67_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10f@PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -3774,19 +3719,16 @@ define <4 x double> @constrained_vector_log10_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10@PLT
 ; S390X-NEXT:    larl %r1, .LCPI69_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10@PLT
 ; S390X-NEXT:    larl %r1, .LCPI69_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10 at PLT
 ; S390X-NEXT:    larl %r1, .LCPI69_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log10 at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -3889,9 +3831,8 @@ define <2 x double> @constrained_vector_log2_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    larl %r1, .LCPI71_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -3942,14 +3883,12 @@ define <3 x float> @constrained_vector_log2_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2f at PLT
 ; S390X-NEXT:    larl %r1, .LCPI72_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2f at PLT
 ; S390X-NEXT:    larl %r1, .LCPI72_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2f at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -4091,19 +4030,16 @@ define <4 x double> @constrained_vector_log2_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    larl %r1, .LCPI74_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    larl %r1, .LCPI74_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    larl %r1, .LCPI74_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, log2 at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -4352,9 +4288,8 @@ define <2 x double> @constrained_vector_nearbyint_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    larl %r1, .LCPI81_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -4391,14 +4326,12 @@ define <3 x float> @constrained_vector_nearbyint_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyintf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI82_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyintf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI82_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyintf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -4502,19 +4435,16 @@ define <4 x double> @constrained_vector_nearbyint_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    larl %r1, .LCPI84_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    larl %r1, .LCPI84_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    larl %r1, .LCPI84_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, nearbyint at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -4598,11 +4528,10 @@ define <2 x double> @constrained_vector_maxnum_v2f64() #0 {
 ; S390X-NEXT:    ld %f2, 0(%r1)
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    larl %r1, .LCPI86_2
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f8, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI86_3
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -4662,11 +4591,10 @@ define <3 x float> @constrained_vector_maxnum_v3f32() #0 {
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, fmaxf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI87_2
-; S390X-NEXT:    le %f1, 0(%r1)
+; S390X-NEXT:    ler %f9, %f0
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI87_3
 ; S390X-NEXT:    le %f2, 0(%r1)
-; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmaxf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI87_4
 ; S390X-NEXT:    le %f2, 0(%r1)
@@ -4837,25 +4765,22 @@ define <4 x double> @constrained_vector_maxnum_v4f64() #0 {
 ; S390X-NEXT:    ld %f2, 0(%r1)
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    larl %r1, .LCPI89_2
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f8, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI89_3
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    larl %r1, .LCPI89_4
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f9, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI89_5
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    larl %r1, .LCPI89_6
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f10, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI89_7
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmax at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -4972,11 +4897,10 @@ define <2 x double> @constrained_vector_minnum_v2f64() #0 {
 ; S390X-NEXT:    ld %f2, 0(%r1)
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    larl %r1, .LCPI91_2
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f8, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI91_3
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -5036,11 +4960,10 @@ define <3 x float> @constrained_vector_minnum_v3f32() #0 {
 ; S390X-NEXT:    ler %f2, %f8
 ; S390X-NEXT:    brasl %r14, fminf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI92_2
-; S390X-NEXT:    le %f1, 0(%r1)
+; S390X-NEXT:    ler %f9, %f0
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI92_3
 ; S390X-NEXT:    le %f2, 0(%r1)
-; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
 ; S390X-NEXT:    brasl %r14, fminf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI92_4
 ; S390X-NEXT:    le %f2, 0(%r1)
@@ -5215,25 +5138,22 @@ define <4 x double> @constrained_vector_minnum_v4f64() #0 {
 ; S390X-NEXT:    ld %f2, 0(%r1)
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    larl %r1, .LCPI94_2
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f8, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI94_3
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    larl %r1, .LCPI94_4
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f9, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI94_5
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    larl %r1, .LCPI94_6
-; S390X-NEXT:    ld %f1, 0(%r1)
+; S390X-NEXT:    ldr %f10, %f0
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    larl %r1, .LCPI94_7
 ; S390X-NEXT:    ld %f2, 0(%r1)
-; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
 ; S390X-NEXT:    brasl %r14, fmin at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
@@ -5585,9 +5505,8 @@ define <2 x double> @constrained_vector_ceil_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, ceil at PLT
 ; S390X-NEXT:    larl %r1, .LCPI104_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, ceil at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -5623,14 +5542,12 @@ define <3 x float> @constrained_vector_ceil_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, ceilf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI105_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, ceilf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI105_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, ceilf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -5755,9 +5672,8 @@ define <2 x double> @constrained_vector_floor_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, floor at PLT
 ; S390X-NEXT:    larl %r1, .LCPI108_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, floor at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -5793,14 +5709,12 @@ define <3 x float> @constrained_vector_floor_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, floorf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI109_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, floorf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI109_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, floorf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -5924,9 +5838,8 @@ define <2 x double> @constrained_vector_round_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, round at PLT
 ; S390X-NEXT:    larl %r1, .LCPI112_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, round at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -5962,14 +5875,12 @@ define <3 x float> @constrained_vector_round_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, roundf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI113_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, roundf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI113_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, roundf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -6094,9 +6005,8 @@ define <2 x double> @constrained_vector_trunc_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, trunc at PLT
 ; S390X-NEXT:    larl %r1, .LCPI116_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, trunc at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -6132,14 +6042,12 @@ define <3 x float> @constrained_vector_trunc_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, truncf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI117_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, truncf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI117_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, truncf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -6272,9 +6180,8 @@ define <2 x double> @constrained_vector_tan_v2f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    larl %r1, .LCPI120_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    ldr %f2, %f8
 ; S390X-NEXT:    ld %f8, 160(%r15) # 8-byte Folded Reload
@@ -6325,14 +6232,12 @@ define <3 x float> @constrained_vector_tan_v3f32() #0 {
 ; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tanf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI121_1
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f8, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tanf at PLT
 ; S390X-NEXT:    larl %r1, .LCPI121_2
-; S390X-NEXT:    le %f1, 0(%r1)
 ; S390X-NEXT:    ler %f9, %f0
-; S390X-NEXT:    ler %f0, %f1
+; S390X-NEXT:    le %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tanf at PLT
 ; S390X-NEXT:    ler %f2, %f9
 ; S390X-NEXT:    ler %f4, %f8
@@ -6474,19 +6379,16 @@ define <4 x double> @constrained_vector_tan_v4f64() #0 {
 ; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    larl %r1, .LCPI123_1
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f8, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    larl %r1, .LCPI123_2
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f9, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    larl %r1, .LCPI123_3
-; S390X-NEXT:    ld %f1, 0(%r1)
 ; S390X-NEXT:    ldr %f10, %f0
-; S390X-NEXT:    ldr %f0, %f1
+; S390X-NEXT:    ld %f0, 0(%r1)
 ; S390X-NEXT:    brasl %r14, tan at PLT
 ; S390X-NEXT:    ldr %f2, %f10
 ; S390X-NEXT:    ldr %f4, %f9
diff --git a/llvm/test/CodeGen/Thumb2/mve-pred-ext.ll b/llvm/test/CodeGen/Thumb2/mve-pred-ext.ll
index 117469f3bd788b..1bf4bd4afe7a36 100644
--- a/llvm/test/CodeGen/Thumb2/mve-pred-ext.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-pred-ext.ll
@@ -942,8 +942,7 @@ define arm_aapcs_vfpcc <2 x double> @uitofp_v2i1_v2f64(<2 x i64> %src) {
 ; CHECK-NEXT:    vmov d9, r0, r1
 ; CHECK-NEXT:    rsbs r2, r2, #0
 ; CHECK-NEXT:    sbcs.w r2, r4, r3
-; CHECK-NEXT:    cset r2, lt
-; CHECK-NEXT:    mov r0, r2
+; CHECK-NEXT:    cset r0, lt
 ; CHECK-NEXT:    bl __aeabi_ui2d
 ; CHECK-NEXT:    vmov d8, r0, r1
 ; CHECK-NEXT:    vmov q0, q4
@@ -973,8 +972,7 @@ define arm_aapcs_vfpcc <2 x double> @sitofp_v2i1_v2f64(<2 x i64> %src) {
 ; CHECK-NEXT:    vmov d9, r0, r1
 ; CHECK-NEXT:    rsbs r2, r2, #0
 ; CHECK-NEXT:    sbcs.w r2, r4, r3
-; CHECK-NEXT:    csetm r2, lt
-; CHECK-NEXT:    mov r0, r2
+; CHECK-NEXT:    csetm r0, lt
 ; CHECK-NEXT:    bl __aeabi_i2d
 ; CHECK-NEXT:    vmov d8, r0, r1
 ; CHECK-NEXT:    vmov q0, q4
diff --git a/llvm/test/CodeGen/Thumb2/mve-vmull-splat.ll b/llvm/test/CodeGen/Thumb2/mve-vmull-splat.ll
index cebc0d9c0e172c..7fd538ecc6f5a7 100644
--- a/llvm/test/CodeGen/Thumb2/mve-vmull-splat.ll
+++ b/llvm/test/CodeGen/Thumb2/mve-vmull-splat.ll
@@ -194,7 +194,6 @@ define arm_aapcs_vfpcc <4 x i64> @sext32_0213_0ext(<8 x i32> %src1, i32 %src2) {
 ; CHECK-NEXT:    vpush {d8, d9}
 ; CHECK-NEXT:    vmov q4, q0
 ; CHECK-NEXT:    vmov q3[2], q3[0], r0, r0
-; CHECK-NEXT:    vmov.f32 s17, s4
 ; CHECK-NEXT:    vmov.f32 s0, s1
 ; CHECK-NEXT:    vmullb.s32 q2, q4, q3
 ; CHECK-NEXT:    vmov.f32 s2, s3
@@ -219,7 +218,6 @@ define arm_aapcs_vfpcc <4 x i64> @sext32_0ext_0213(<8 x i32> %src1, i32 %src2) {
 ; CHECK-NEXT:    vpush {d8, d9}
 ; CHECK-NEXT:    vmov q4, q0
 ; CHECK-NEXT:    vmov q3[2], q3[0], r0, r0
-; CHECK-NEXT:    vmov.f32 s17, s4
 ; CHECK-NEXT:    vmov.f32 s0, s1
 ; CHECK-NEXT:    vmullb.s32 q2, q3, q4
 ; CHECK-NEXT:    vmov.f32 s2, s3
@@ -480,7 +478,6 @@ define arm_aapcs_vfpcc <4 x i64> @zext32_0213_0ext(<8 x i32> %src1, i32 %src2) {
 ; CHECK-NEXT:    vpush {d8, d9}
 ; CHECK-NEXT:    vmov q4, q0
 ; CHECK-NEXT:    vmov q3[2], q3[0], r0, r0
-; CHECK-NEXT:    vmov.f32 s17, s4
 ; CHECK-NEXT:    vmov.f32 s0, s1
 ; CHECK-NEXT:    vmullb.u32 q2, q4, q3
 ; CHECK-NEXT:    vmov.f32 s2, s3
@@ -505,7 +502,6 @@ define arm_aapcs_vfpcc <4 x i64> @zext32_0ext_0213(<8 x i32> %src1, i32 %src2) {
 ; CHECK-NEXT:    vpush {d8, d9}
 ; CHECK-NEXT:    vmov q4, q0
 ; CHECK-NEXT:    vmov q3[2], q3[0], r0, r0
-; CHECK-NEXT:    vmov.f32 s17, s4
 ; CHECK-NEXT:    vmov.f32 s0, s1
 ; CHECK-NEXT:    vmullb.u32 q2, q3, q4
 ; CHECK-NEXT:    vmov.f32 s2, s3
diff --git a/llvm/test/CodeGen/X86/avx512-intrinsics.ll b/llvm/test/CodeGen/X86/avx512-intrinsics.ll
index b77c753107a6e1..eda085b0a5c4a2 100644
--- a/llvm/test/CodeGen/X86/avx512-intrinsics.ll
+++ b/llvm/test/CodeGen/X86/avx512-intrinsics.ll
@@ -1013,9 +1013,8 @@ define <16 x i16> @test_x86_vcvtps2ph_256(<16 x float> %a0, <16 x i16> %src, i16
 ; X64-NEXT:    kmovw %edi, %k1
 ; X64-NEXT:    vcvtps2ph $3, {sae}, %zmm0, %ymm2 {%k1} {z}
 ; X64-NEXT:    vcvtps2ph $4, {sae}, %zmm0, %ymm1 {%k1}
-; X64-NEXT:    vpaddw %ymm1, %ymm2, %ymm1
 ; X64-NEXT:    vcvtps2ph $2, %zmm0, (%rsi)
-; X64-NEXT:    vmovdqa %ymm1, %ymm0
+; X64-NEXT:    vpaddw %ymm1, %ymm2, %ymm0
 ; X64-NEXT:    retq
 ;
 ; X86-LABEL: test_x86_vcvtps2ph_256:
@@ -1024,9 +1023,8 @@ define <16 x i16> @test_x86_vcvtps2ph_256(<16 x float> %a0, <16 x i16> %src, i16
 ; X86-NEXT:    kmovw {{[0-9]+}}(%esp), %k1
 ; X86-NEXT:    vcvtps2ph $3, {sae}, %zmm0, %ymm2 {%k1} {z}
 ; X86-NEXT:    vcvtps2ph $4, {sae}, %zmm0, %ymm1 {%k1}
-; X86-NEXT:    vpaddw %ymm1, %ymm2, %ymm1
 ; X86-NEXT:    vcvtps2ph $2, %zmm0, (%eax)
-; X86-NEXT:    vmovdqa %ymm1, %ymm0
+; X86-NEXT:    vpaddw %ymm1, %ymm2, %ymm0
 ; X86-NEXT:    retl
   %res1 = call <16 x i16> @llvm.x86.avx512.mask.vcvtps2ph.512(<16 x float> %a0, i32 2, <16 x i16> zeroinitializer, i16 -1)
   %res2 = call <16 x i16> @llvm.x86.avx512.mask.vcvtps2ph.512(<16 x float> %a0, i32 11, <16 x i16> zeroinitializer, i16 %mask)
diff --git a/llvm/test/CodeGen/X86/matrix-multiply.ll b/llvm/test/CodeGen/X86/matrix-multiply.ll
index ef85cd146d65f9..522674017632d5 100644
--- a/llvm/test/CodeGen/X86/matrix-multiply.ll
+++ b/llvm/test/CodeGen/X86/matrix-multiply.ll
@@ -3294,7 +3294,8 @@ define <64 x double> @test_mul8x8_f64(<64 x double> %a0, <64 x double> %a1) noun
 ; SSE-NEXT:    movapd %xmm1, {{[-0-9]+}}(%r{{[sb]}}p) # 16-byte Spill
 ; SSE-NEXT:    movapd %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 16-byte Spill
 ; SSE-NEXT:    movq %rdi, %rax
-; SSE-NEXT:    movapd {{[0-9]+}}(%rsp), %xmm14
+; SSE-NEXT:    movapd %xmm1, %xmm9
+; SSE-NEXT:    movapd {{[0-9]+}}(%rsp), %xmm1
 ; SSE-NEXT:    movapd {{[0-9]+}}(%rsp), %xmm11
 ; SSE-NEXT:    movapd {{[0-9]+}}(%rsp), %xmm13
 ; SSE-NEXT:    movapd %xmm13, %xmm12
@@ -3303,7 +3304,6 @@ define <64 x double> @test_mul8x8_f64(<64 x double> %a0, <64 x double> %a1) noun
 ; SSE-NEXT:    mulpd %xmm12, %xmm10
 ; SSE-NEXT:    movapd %xmm2, %xmm8
 ; SSE-NEXT:    mulpd %xmm12, %xmm8
-; SSE-NEXT:    movapd %xmm1, %xmm9
 ; SSE-NEXT:    mulpd %xmm12, %xmm9
 ; SSE-NEXT:    mulpd %xmm0, %xmm12
 ; SSE-NEXT:    unpckhpd {{.*#+}} xmm13 = xmm13[1,1]
@@ -3321,7 +3321,6 @@ define <64 x double> @test_mul8x8_f64(<64 x double> %a0, <64 x double> %a1) noun
 ; SSE-NEXT:    addpd %xmm12, %xmm13
 ; SSE-NEXT:    movapd %xmm11, %xmm6
 ; SSE-NEXT:    unpcklpd {{.*#+}} xmm6 = xmm6[0],xmm11[0]
-; SSE-NEXT:    movapd %xmm14, %xmm1
 ; SSE-NEXT:    mulpd %xmm6, %xmm1
 ; SSE-NEXT:    addpd %xmm13, %xmm1
 ; SSE-NEXT:    movapd {{[0-9]+}}(%rsp), %xmm3
diff --git a/llvm/test/CodeGen/X86/vec_saddo.ll b/llvm/test/CodeGen/X86/vec_saddo.ll
index 460c5fe11f82a5..18ce7504927943 100644
--- a/llvm/test/CodeGen/X86/vec_saddo.ll
+++ b/llvm/test/CodeGen/X86/vec_saddo.ll
@@ -593,7 +593,8 @@ define <16 x i32> @saddo_v16i8(<16 x i8> %a0, <16 x i8> %a1, ptr %p2) nounwind {
 ; SSE41-NEXT:    pcmpeqb %xmm0, %xmm2
 ; SSE41-NEXT:    pcmpeqd %xmm3, %xmm3
 ; SSE41-NEXT:    pxor %xmm2, %xmm3
-; SSE41-NEXT:    pmovsxbd %xmm3, %xmm4
+; SSE41-NEXT:    movdqa %xmm0, (%rdi)
+; SSE41-NEXT:    pmovsxbd %xmm3, %xmm0
 ; SSE41-NEXT:    pshufd {{.*#+}} xmm1 = xmm3[1,1,1,1]
 ; SSE41-NEXT:    pmovzxbd {{.*#+}} xmm1 = xmm1[0],zero,zero,zero,xmm1[1],zero,zero,zero,xmm1[2],zero,zero,zero,xmm1[3],zero,zero,zero
 ; SSE41-NEXT:    pslld $31, %xmm1
@@ -606,8 +607,6 @@ define <16 x i32> @saddo_v16i8(<16 x i8> %a0, <16 x i8> %a1, ptr %p2) nounwind {
 ; SSE41-NEXT:    pmovzxbd {{.*#+}} xmm3 = xmm3[0],zero,zero,zero,xmm3[1],zero,zero,zero,xmm3[2],zero,zero,zero,xmm3[3],zero,zero,zero
 ; SSE41-NEXT:    pslld $31, %xmm3
 ; SSE41-NEXT:    psrad $31, %xmm3
-; SSE41-NEXT:    movdqa %xmm0, (%rdi)
-; SSE41-NEXT:    movdqa %xmm4, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX1-LABEL: saddo_v16i8:
@@ -769,9 +768,8 @@ define <2 x i32> @saddo_v2i64(<2 x i64> %a0, <2 x i64> %a1, ptr %p2) nounwind {
 ; SSE2-NEXT:    pxor %xmm2, %xmm2
 ; SSE2-NEXT:    pcmpgtd %xmm1, %xmm2
 ; SSE2-NEXT:    pxor %xmm3, %xmm2
-; SSE2-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
 ; SSE2-NEXT:    movdqa %xmm0, (%rdi)
-; SSE2-NEXT:    movdqa %xmm1, %xmm0
+; SSE2-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
 ; SSE2-NEXT:    retq
 ;
 ; SSSE3-LABEL: saddo_v2i64:
@@ -792,9 +790,8 @@ define <2 x i32> @saddo_v2i64(<2 x i64> %a0, <2 x i64> %a1, ptr %p2) nounwind {
 ; SSSE3-NEXT:    pxor %xmm2, %xmm2
 ; SSSE3-NEXT:    pcmpgtd %xmm1, %xmm2
 ; SSSE3-NEXT:    pxor %xmm3, %xmm2
-; SSSE3-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
 ; SSSE3-NEXT:    movdqa %xmm0, (%rdi)
-; SSSE3-NEXT:    movdqa %xmm1, %xmm0
+; SSSE3-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
 ; SSSE3-NEXT:    retq
 ;
 ; SSE41-LABEL: saddo_v2i64:
@@ -815,9 +812,8 @@ define <2 x i32> @saddo_v2i64(<2 x i64> %a0, <2 x i64> %a1, ptr %p2) nounwind {
 ; SSE41-NEXT:    pxor %xmm2, %xmm2
 ; SSE41-NEXT:    pcmpgtd %xmm1, %xmm2
 ; SSE41-NEXT:    pxor %xmm3, %xmm2
-; SSE41-NEXT:    pshufd {{.*#+}} xmm1 = xmm2[0,2,2,3]
 ; SSE41-NEXT:    movdqa %xmm0, (%rdi)
-; SSE41-NEXT:    movdqa %xmm1, %xmm0
+; SSE41-NEXT:    pshufd {{.*#+}} xmm0 = xmm2[0,2,2,3]
 ; SSE41-NEXT:    retq
 ;
 ; AVX-LABEL: saddo_v2i64:
diff --git a/llvm/test/CodeGen/X86/vec_ssubo.ll b/llvm/test/CodeGen/X86/vec_ssubo.ll
index d06993da6365d8..1688bc31f7e53f 100644
--- a/llvm/test/CodeGen/X86/vec_ssubo.ll
+++ b/llvm/test/CodeGen/X86/vec_ssubo.ll
@@ -598,7 +598,8 @@ define <16 x i32> @ssubo_v16i8(<16 x i8> %a0, <16 x i8> %a1, ptr %p2) nounwind {
 ; SSE41-NEXT:    pcmpeqb %xmm0, %xmm2
 ; SSE41-NEXT:    pcmpeqd %xmm3, %xmm3
 ; SSE41-NEXT:    pxor %xmm2, %xmm3
-; SSE41-NEXT:    pmovsxbd %xmm3, %xmm4
+; SSE41-NEXT:    movdqa %xmm0, (%rdi)
+; SSE41-NEXT:    pmovsxbd %xmm3, %xmm0
 ; SSE41-NEXT:    pshufd {{.*#+}} xmm1 = xmm3[1,1,1,1]
 ; SSE41-NEXT:    pmovzxbd {{.*#+}} xmm1 = xmm1[0],zero,zero,zero,xmm1[1],zero,zero,zero,xmm1[2],zero,zero,zero,xmm1[3],zero,zero,zero
 ; SSE41-NEXT:    pslld $31, %xmm1
@@ -611,8 +612,6 @@ define <16 x i32> @ssubo_v16i8(<16 x i8> %a0, <16 x i8> %a1, ptr %p2) nounwind {
 ; SSE41-NEXT:    pmovzxbd {{.*#+}} xmm3 = xmm3[0],zero,zero,zero,xmm3[1],zero,zero,zero,xmm3[2],zero,zero,zero,xmm3[3],zero,zero,zero
 ; SSE41-NEXT:    pslld $31, %xmm3
 ; SSE41-NEXT:    psrad $31, %xmm3
-; SSE41-NEXT:    movdqa %xmm0, (%rdi)
-; SSE41-NEXT:    movdqa %xmm4, %xmm0
 ; SSE41-NEXT:    retq
 ;
 ; AVX1-LABEL: ssubo_v16i8:
diff --git a/llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll b/llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll
index 81ce14132c8799..0e51f030e1bc51 100644
--- a/llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll
+++ b/llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll
@@ -740,13 +740,12 @@ define <16 x i64> @bit_reversal_permutation(<16 x i64> %a0) nounwind {
 ; X64-AVX1-NEXT:    vperm2f128 {{.*#+}} ymm5 = ymm2[2,3],ymm3[2,3]
 ; X64-AVX1-NEXT:    vperm2f128 {{.*#+}} ymm6 = ymm0[2,3],ymm1[2,3]
 ; X64-AVX1-NEXT:    vunpcklpd {{.*#+}} ymm4 = ymm6[0],ymm5[0],ymm6[2],ymm5[2]
-; X64-AVX1-NEXT:    vunpckhpd {{.*#+}} ymm5 = ymm6[1],ymm5[1],ymm6[3],ymm5[3]
 ; X64-AVX1-NEXT:    vinsertf128 $1, %xmm3, %ymm2, %ymm2
+; X64-AVX1-NEXT:    vunpckhpd {{.*#+}} ymm3 = ymm6[1],ymm5[1],ymm6[3],ymm5[3]
 ; X64-AVX1-NEXT:    vinsertf128 $1, %xmm1, %ymm0, %ymm1
 ; X64-AVX1-NEXT:    vunpcklpd {{.*#+}} ymm0 = ymm1[0],ymm2[0],ymm1[2],ymm2[2]
 ; X64-AVX1-NEXT:    vunpckhpd {{.*#+}} ymm2 = ymm1[1],ymm2[1],ymm1[3],ymm2[3]
 ; X64-AVX1-NEXT:    vmovaps %ymm4, %ymm1
-; X64-AVX1-NEXT:    vmovaps %ymm5, %ymm3
 ; X64-AVX1-NEXT:    retq
 ;
 ; X64-AVX2-LABEL: bit_reversal_permutation:
diff --git a/llvm/test/CodeGen/X86/xmulo.ll b/llvm/test/CodeGen/X86/xmulo.ll
index 2169b39b9dfa05..75f54d0484d93c 100644
--- a/llvm/test/CodeGen/X86/xmulo.ll
+++ b/llvm/test/CodeGen/X86/xmulo.ll
@@ -63,9 +63,8 @@ define zeroext i1 @smuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    imulb %sil
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: smuloi8:
@@ -83,9 +82,8 @@ define zeroext i1 @smuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    imulb %dl
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: smuloi8:
@@ -93,9 +91,8 @@ define zeroext i1 @smuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %edx
 ; WIN32-NEXT:    movzbl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    imulb {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %t = call {i8, i1} @llvm.smul.with.overflow.i8(i8 %v1, i8 %v2)
   %val = extractvalue {i8, i1} %t, 0
@@ -290,9 +287,8 @@ define zeroext i1 @umuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    mulb %sil
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi8:
@@ -310,9 +306,8 @@ define zeroext i1 @umuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mulb %dl
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi8:
@@ -320,9 +315,8 @@ define zeroext i1 @umuloi8(i8 %v1, i8 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %edx
 ; WIN32-NEXT:    movzbl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    mulb {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %t = call {i8, i1} @llvm.umul.with.overflow.i8(i8 %v1, i8 %v2)
   %val = extractvalue {i8, i1} %t, 0
@@ -338,9 +332,8 @@ define zeroext i1 @umuloi16(i16 %v1, i16 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $ax killed $ax killed $eax
 ; SDAG-NEXT:    mulw %si
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movw %ax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi16:
@@ -359,9 +352,8 @@ define zeroext i1 @umuloi16(i16 %v1, i16 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mulw %dx
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movw %ax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi16:
@@ -370,9 +362,8 @@ define zeroext i1 @umuloi16(i16 %v1, i16 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %esi
 ; WIN32-NEXT:    movzwl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    mulw {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movw %ax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %t = call {i16, i1} @llvm.umul.with.overflow.i16(i16 %v1, i16 %v2)
@@ -388,9 +379,8 @@ define zeroext i1 @umuloi32(i32 %v1, i32 %v2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    mull %esi
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movl %eax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi32:
@@ -408,9 +398,8 @@ define zeroext i1 @umuloi32(i32 %v1, i32 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mull %edx
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movl %eax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi32:
@@ -419,9 +408,8 @@ define zeroext i1 @umuloi32(i32 %v1, i32 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %esi
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    mull {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movl %eax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %t = call {i32, i1} @llvm.umul.with.overflow.i32(i32 %v1, i32 %v2)
@@ -437,9 +425,8 @@ define zeroext i1 @umuloi64(i64 %v1, i64 %v2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movq %rdi, %rax
 ; SDAG-NEXT:    mulq %rsi
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movq %rax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi64:
@@ -457,9 +444,8 @@ define zeroext i1 @umuloi64(i64 %v1, i64 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movq %rcx, %rax
 ; WIN64-NEXT:    mulq %rdx
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movq %rax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi64:
@@ -1399,9 +1385,8 @@ define zeroext i1 @smuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %esi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    imulb (%rdi)
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: smuloi8_load:
@@ -1418,9 +1403,8 @@ define zeroext i1 @smuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %edx, %eax
 ; WIN64-NEXT:    imulb (%rcx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: smuloi8_load:
@@ -1429,9 +1413,8 @@ define zeroext i1 @smuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movzbl (%eax), %eax
 ; WIN32-NEXT:    imulb {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %v1 = load i8, ptr %ptr1
   %t = call {i8, i1} @llvm.smul.with.overflow.i8(i8 %v1, i8 %v2)
@@ -1447,9 +1430,8 @@ define zeroext i1 @smuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    imulb (%rsi)
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: smuloi8_load2:
@@ -1467,9 +1449,8 @@ define zeroext i1 @smuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    imulb (%rdx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: smuloi8_load2:
@@ -1478,9 +1459,8 @@ define zeroext i1 @smuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; WIN32-NEXT:    movzbl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %ecx
 ; WIN32-NEXT:    imulb (%ecx)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %v2 = load i8, ptr %ptr2
   %t = call {i8, i1} @llvm.smul.with.overflow.i8(i8 %v1, i8 %v2)
@@ -1869,9 +1849,8 @@ define zeroext i1 @umuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %esi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    mulb (%rdi)
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi8_load:
@@ -1888,9 +1867,8 @@ define zeroext i1 @umuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %edx, %eax
 ; WIN64-NEXT:    mulb (%rcx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi8_load:
@@ -1899,9 +1877,8 @@ define zeroext i1 @umuloi8_load(ptr %ptr1, i8 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movzbl (%eax), %eax
 ; WIN32-NEXT:    mulb {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %v1 = load i8, ptr %ptr1
   %t = call {i8, i1} @llvm.umul.with.overflow.i8(i8 %v1, i8 %v2)
@@ -1917,9 +1894,8 @@ define zeroext i1 @umuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $al killed $al killed $eax
 ; SDAG-NEXT:    mulb (%rsi)
-; SDAG-NEXT:    seto %cl
 ; SDAG-NEXT:    movb %al, (%rdx)
-; SDAG-NEXT:    movl %ecx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi8_load2:
@@ -1937,9 +1913,8 @@ define zeroext i1 @umuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mulb (%rdx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movb %al, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi8_load2:
@@ -1948,9 +1923,8 @@ define zeroext i1 @umuloi8_load2(i8 %v1, ptr %ptr2, ptr %res) {
 ; WIN32-NEXT:    movzbl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %ecx
 ; WIN32-NEXT:    mulb (%ecx)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movb %al, (%edx)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    retl
   %v2 = load i8, ptr %ptr2
   %t = call {i8, i1} @llvm.umul.with.overflow.i8(i8 %v1, i8 %v2)
@@ -1967,9 +1941,8 @@ define zeroext i1 @umuloi16_load(ptr %ptr1, i16 %v2, ptr %res) {
 ; SDAG-NEXT:    movl %esi, %eax
 ; SDAG-NEXT:    # kill: def $ax killed $ax killed $eax
 ; SDAG-NEXT:    mulw (%rdi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movw %ax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi16_load:
@@ -1987,9 +1960,8 @@ define zeroext i1 @umuloi16_load(ptr %ptr1, i16 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %edx, %eax
 ; WIN64-NEXT:    mulw (%rcx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movw %ax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi16_load:
@@ -1999,9 +1971,8 @@ define zeroext i1 @umuloi16_load(ptr %ptr1, i16 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movzwl (%eax), %eax
 ; WIN32-NEXT:    mulw {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movw %ax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %v1 = load i16, ptr %ptr1
@@ -2019,9 +1990,8 @@ define zeroext i1 @umuloi16_load2(i16 %v1, ptr %ptr2, ptr %res) {
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    # kill: def $ax killed $ax killed $eax
 ; SDAG-NEXT:    mulw (%rsi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movw %ax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi16_load2:
@@ -2040,9 +2010,8 @@ define zeroext i1 @umuloi16_load2(i16 %v1, ptr %ptr2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mulw (%rdx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movw %ax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi16_load2:
@@ -2052,9 +2021,8 @@ define zeroext i1 @umuloi16_load2(i16 %v1, ptr %ptr2, ptr %res) {
 ; WIN32-NEXT:    movzwl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %ecx
 ; WIN32-NEXT:    mulw (%ecx)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movw %ax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %v2 = load i16, ptr %ptr2
@@ -2071,9 +2039,8 @@ define zeroext i1 @umuloi32_load(ptr %ptr1, i32 %v2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movl %esi, %eax
 ; SDAG-NEXT:    mull (%rdi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movl %eax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi32_load:
@@ -2091,9 +2058,8 @@ define zeroext i1 @umuloi32_load(ptr %ptr1, i32 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %edx, %eax
 ; WIN64-NEXT:    mull (%rcx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movl %eax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi32_load:
@@ -2103,9 +2069,8 @@ define zeroext i1 @umuloi32_load(ptr %ptr1, i32 %v2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movl (%eax), %eax
 ; WIN32-NEXT:    mull {{[0-9]+}}(%esp)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movl %eax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %v1 = load i32, ptr %ptr1
@@ -2122,9 +2087,8 @@ define zeroext i1 @umuloi32_load2(i32 %v1, ptr %ptr2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movl %edi, %eax
 ; SDAG-NEXT:    mull (%rsi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movl %eax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi32_load2:
@@ -2142,9 +2106,8 @@ define zeroext i1 @umuloi32_load2(i32 %v1, ptr %ptr2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movl %ecx, %eax
 ; WIN64-NEXT:    mull (%rdx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movl %eax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi32_load2:
@@ -2154,9 +2117,8 @@ define zeroext i1 @umuloi32_load2(i32 %v1, ptr %ptr2, ptr %res) {
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %eax
 ; WIN32-NEXT:    movl {{[0-9]+}}(%esp), %ecx
 ; WIN32-NEXT:    mull (%ecx)
-; WIN32-NEXT:    seto %cl
 ; WIN32-NEXT:    movl %eax, (%esi)
-; WIN32-NEXT:    movl %ecx, %eax
+; WIN32-NEXT:    seto %al
 ; WIN32-NEXT:    popl %esi
 ; WIN32-NEXT:    retl
   %v2 = load i32, ptr %ptr2
@@ -2173,9 +2135,8 @@ define zeroext i1 @umuloi64_load(ptr %ptr1, i64 %v2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movq %rsi, %rax
 ; SDAG-NEXT:    mulq (%rdi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movq %rax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi64_load:
@@ -2193,9 +2154,8 @@ define zeroext i1 @umuloi64_load(ptr %ptr1, i64 %v2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movq %rdx, %rax
 ; WIN64-NEXT:    mulq (%rcx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movq %rax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi64_load:
@@ -2250,9 +2210,8 @@ define zeroext i1 @umuloi64_load2(i64 %v1, ptr %ptr2, ptr %res) {
 ; SDAG-NEXT:    movq %rdx, %rcx
 ; SDAG-NEXT:    movq %rdi, %rax
 ; SDAG-NEXT:    mulq (%rsi)
-; SDAG-NEXT:    seto %dl
 ; SDAG-NEXT:    movq %rax, (%rcx)
-; SDAG-NEXT:    movl %edx, %eax
+; SDAG-NEXT:    seto %al
 ; SDAG-NEXT:    retq
 ;
 ; FAST-LABEL: umuloi64_load2:
@@ -2270,9 +2229,8 @@ define zeroext i1 @umuloi64_load2(i64 %v1, ptr %ptr2, ptr %res) {
 ; WIN64:       # %bb.0:
 ; WIN64-NEXT:    movq %rcx, %rax
 ; WIN64-NEXT:    mulq (%rdx)
-; WIN64-NEXT:    seto %cl
 ; WIN64-NEXT:    movq %rax, (%r8)
-; WIN64-NEXT:    movl %ecx, %eax
+; WIN64-NEXT:    seto %al
 ; WIN64-NEXT:    retq
 ;
 ; WIN32-LABEL: umuloi64_load2:


