[llvm] [RISCV] Switch to sign-extended loads if possible in RISCVOptWInstrs (PR #144703)

Alex Bradbury via llvm-commits llvm-commits at lists.llvm.org
Wed Jun 18 06:47:28 PDT 2025


https://github.com/asb created https://github.com/llvm/llvm-project/pull/144703

This is currently implemented as part of RISCVOptWInstrs in order to reuse hasAllNBitUsers. However, a new home or further refactoring will be needed (see the end of this note).

Why prefer sign-extended loads?
* Sign-extending loads are more compressible. There is no compressed LWU, and LBU and LHU are only available in Zcb. (A schematic example follows this list.)
* Helps to minimise the diff vs RV32 (e.g. LWU vs LW).
* Helps to minimise distracting diffs vs GCC. I see this come up frequently when comparing against GCC output, and in these cases it's a red herring.
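
As a schematic illustration of the rewrite (hand-written rather than compiler output; registers and offsets are made up):

    # Before: zero-extending word load. LWU has no compressed encoding.
    lwu   a0, 0(a1)
    addw  a0, a0, a2    # reads only the low 32 bits of a0

    # After: LW compresses to C.LW. Because every user reads only the
    # low 32 bits, the difference in bits 63:32 is unobservable.
    lw    a0, 0(a1)
    addw  a0, a0, a2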

Issues or open questions with this patch as it stands:
* Doing something at the MI level makes sense as a last resort, but I wonder if, for some of these cases, we could produce a sign-extended load earlier on.
* RISCVOptWInstrs is a slightly awkward home. It's currently not run for RV32, which means the LBU/LHU changes can actually add new diffs vs RV32. Potentially just this one transformation could be done for RV32.
* Do we want to perform additional load narrowing? With the existing code, an LD will be narrowed to LW if only the lower bits are needed. The example that made me look at extending-load normalisation in the first place was an LWU that immediately has a small mask applied, which could be narrowed to LB (see the sketch after this list). It's not clear this has any real benefit.
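
A sketch of that narrowing case (again hand-written and illustrative; it assumes a non-volatile load on little-endian RISC-V, where shrinking the access width is legal):

    # Today: 32-bit zero-extending load followed by a small mask.
    lwu   a0, 0(a1)
    andi  a0, a0, 63

    # Possible narrowing: only the low byte can survive the mask, and
    # the ANDI clears whatever LB sign-extends into bits 63:8.
    lb    a0, 0(a1)
    andi  a0, a0, 63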

This patch changes 95k instructions across a compile of llvm-test-suite (including SPEC 2017), and all tests complete successfully afterwards.

From de07a26bdcdd8a9699482eadf2f15676a6e117ab Mon Sep 17 00:00:00 2001
From: Alex Bradbury <asb at igalia.com>
Date: Wed, 18 Jun 2025 13:33:58 +0100
Subject: [PATCH] [RISCV] Switch to sign-extended loads if possible in MI pass

This is currently implemented as part of RISCVOptWInstrs in order to
reuse hasAllNBitUsers. However, a new home or further refactoring will
be needed (see the end of this note).

Why prefer sign-extended loads?
* Sign-extending loads are more compressible. There is no compressed
  LWU, and LBU and LHU are only available in Zcb.
* Helps to minimise the diff vs RV32 (e.g. LWU vs LW).
* Helps to minimise distracting diffs vs GCC. I see this come up
  frequently when comparing against GCC output, and in these cases it's
  a red herring.

Issues or open questions with this patch as it stands:
* Doing something at the MI level makes sense as a last resort, but I
  wonder if, for some of these cases, we could produce a sign-extended
  load earlier on.
* RISCVOptWInstrs is a slightly awkward home. It's currently not run for
  RV32, which means the LBU/LHU changes can actually add new diffs vs
  RV32. Potentially just this one transformation could be done for RV32.
* Do we want to perform additional load narrowing? With the existing
  code, an LD will be narrowed to LW if only the lower bits are needed.
  The example that made me look at extending-load normalisation in the
  first place was an LWU that immediately has a small mask applied, which
  could be narrowed to LB. It's not clear this has any real benefit.

This patch changes 95k instructions across a compile of llvm-test-suite
(including SPEC 2017), and all tests complete successfully afterwards.
---
 llvm/lib/Target/RISCV/RISCVOptWInstrs.cpp     |   45 +
 .../RISCV/GlobalISel/double-convert.ll        |   16 +-
 .../CodeGen/RISCV/GlobalISel/float-convert.ll |   16 +-
 llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll |    2 +-
 .../test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll |    6 +-
 ...lar-shift-by-byte-multiple-legalization.ll |  192 ++--
 llvm/test/CodeGen/RISCV/atomic-signext.ll     |    4 +-
 .../CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll |    2 +-
 llvm/test/CodeGen/RISCV/bf16-promote.ll       |   45 +-
 llvm/test/CodeGen/RISCV/bfloat-convert.ll     |    4 +-
 llvm/test/CodeGen/RISCV/bfloat.ll             |    8 +-
 .../CodeGen/RISCV/ctz_zero_return_test.ll     |   16 +-
 .../CodeGen/RISCV/double-convert-strict.ll    |   18 +-
 llvm/test/CodeGen/RISCV/double-convert.ll     |   18 +-
 .../CodeGen/RISCV/float-convert-strict.ll     |   32 +-
 llvm/test/CodeGen/RISCV/float-convert.ll      |   32 +-
 llvm/test/CodeGen/RISCV/fold-mem-offset.ll    |  197 ++--
 llvm/test/CodeGen/RISCV/half-arith.ll         |    2 +-
 .../test/CodeGen/RISCV/half-convert-strict.ll |   42 +-
 llvm/test/CodeGen/RISCV/half-convert.ll       |   60 +-
 .../CodeGen/RISCV/hoist-global-addr-base.ll   |   22 +-
 .../RISCV/local-stack-slot-allocation.ll      |   10 +-
 llvm/test/CodeGen/RISCV/mem64.ll              |    6 +-
 llvm/test/CodeGen/RISCV/memcmp-optsize.ll     |   16 +-
 llvm/test/CodeGen/RISCV/memcmp.ll             |    8 +-
 llvm/test/CodeGen/RISCV/memcpy-inline.ll      |  188 +--
 llvm/test/CodeGen/RISCV/memcpy.ll             |   64 +-
 llvm/test/CodeGen/RISCV/memmove.ll            |   70 +-
 llvm/test/CodeGen/RISCV/nontemporal.ll        |  320 +++---
 llvm/test/CodeGen/RISCV/prefer-w-inst.mir     |    4 +-
 llvm/test/CodeGen/RISCV/rv64zbb.ll            |    2 +-
 llvm/test/CodeGen/RISCV/rv64zbkb.ll           |    2 +-
 llvm/test/CodeGen/RISCV/rvv/expandload.ll     | 1024 ++++++++---------
 .../rvv/fixed-vector-i8-index-cornercase.ll   |    6 +-
 .../rvv/fixed-vectors-extract-subvector.ll    |   48 +-
 .../RISCV/rvv/fixed-vectors-int-buildvec.ll   |  194 ++--
 .../RISCV/rvv/fixed-vectors-int-vrgather.ll   |   47 +-
 .../rvv/fixed-vectors-mask-load-store.ll      |   23 -
 .../RISCV/rvv/fixed-vectors-masked-gather.ll  |  174 +--
 .../RISCV/rvv/fixed-vectors-reduction-int.ll  |   25 +-
 .../fixed-vectors-strided-load-store-asm.ll   |   10 +-
 .../RISCV/rvv/fixed-vectors-strided-vpload.ll |    9 +-
 .../RISCV/rvv/fixed-vectors-unaligned.ll      |    8 +-
 .../CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll |   46 +-
 .../RISCV/rvv/fixed-vectors-vwmulsu.ll        |   46 +-
 .../CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll |   14 -
 .../CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll |   52 +-
 llvm/test/CodeGen/RISCV/rvv/memcpy-inline.ll  |   26 +-
 .../RISCV/rvv/stores-of-loads-merging.ll      |   12 +-
 llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll |   21 +-
 .../CodeGen/RISCV/srem-seteq-illegal-types.ll |    6 +-
 .../CodeGen/RISCV/stack-clash-prologue.ll     |    2 +-
 .../CodeGen/RISCV/unaligned-load-store.ll     |   26 +-
 .../CodeGen/RISCV/urem-seteq-illegal-types.ll |    2 +-
 llvm/test/CodeGen/RISCV/urem-vector-lkk.ll    |   12 +-
 ...lar-shift-by-byte-multiple-legalization.ll |  264 ++---
 .../RISCV/wide-scalar-shift-legalization.ll   |  144 +--
 .../CodeGen/RISCV/zdinx-boundary-check.ll     |    6 +-
 58 files changed, 1917 insertions(+), 1799 deletions(-)

diff --git a/llvm/lib/Target/RISCV/RISCVOptWInstrs.cpp b/llvm/lib/Target/RISCV/RISCVOptWInstrs.cpp
index ed61236415ccf..a141bae55b70f 100644
--- a/llvm/lib/Target/RISCV/RISCVOptWInstrs.cpp
+++ b/llvm/lib/Target/RISCV/RISCVOptWInstrs.cpp
@@ -71,6 +71,8 @@ class RISCVOptWInstrs : public MachineFunctionPass {
                       const RISCVSubtarget &ST, MachineRegisterInfo &MRI);
   bool appendWSuffixes(MachineFunction &MF, const RISCVInstrInfo &TII,
                        const RISCVSubtarget &ST, MachineRegisterInfo &MRI);
+  bool convertZExtLoads(MachineFunction &MF, const RISCVInstrInfo &TII,
+                        const RISCVSubtarget &ST, MachineRegisterInfo &MRI);
 
   void getAnalysisUsage(AnalysisUsage &AU) const override {
     AU.setPreservesCFG();
@@ -788,6 +790,47 @@ bool RISCVOptWInstrs::appendWSuffixes(MachineFunction &MF,
   return MadeChange;
 }
 
+bool RISCVOptWInstrs::convertZExtLoads(MachineFunction &MF,
+                                       const RISCVInstrInfo &TII,
+                                       const RISCVSubtarget &ST,
+                                       MachineRegisterInfo &MRI) {
+  bool MadeChange = false;
+  for (MachineBasicBlock &MBB : MF) {
+    for (MachineInstr &MI : MBB) {
+      unsigned WOpc;
+      int UsersWidth;
+      switch (MI.getOpcode()) {
+      default:
+        continue;
+      case RISCV::LBU:
+        WOpc = RISCV::LB;
+        UsersWidth = 8;
+        break;
+      case RISCV::LHU:
+        WOpc = RISCV::LH;
+        UsersWidth = 16;
+        break;
+      case RISCV::LWU:
+        WOpc = RISCV::LW;
+        UsersWidth = 32;
+        break;
+      }
+
+      if (hasAllNBitUsers(MI, ST, MRI, UsersWidth)) {
+        LLVM_DEBUG(dbgs() << "Replacing " << MI);
+        MI.setDesc(TII.get(WOpc));
+        MI.clearFlag(MachineInstr::MIFlag::NoSWrap);
+        MI.clearFlag(MachineInstr::MIFlag::NoUWrap);
+        MI.clearFlag(MachineInstr::MIFlag::IsExact);
+        LLVM_DEBUG(dbgs() << "     with " << MI);
+        MadeChange = true;
+      }
+    }
+  }
+
+  return MadeChange;
+}
+
 bool RISCVOptWInstrs::runOnMachineFunction(MachineFunction &MF) {
   if (skipFunction(MF.getFunction()))
     return false;
@@ -808,5 +851,7 @@ bool RISCVOptWInstrs::runOnMachineFunction(MachineFunction &MF) {
   if (ST.preferWInst())
     MadeChange |= appendWSuffixes(MF, TII, ST, MRI);
 
+  MadeChange |= convertZExtLoads(MF, TII, ST, MRI);
+
   return MadeChange;
 }
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/double-convert.ll b/llvm/test/CodeGen/RISCV/GlobalISel/double-convert.ll
index a49e94f4bc910..620c5ecc6c1e7 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/double-convert.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/double-convert.ll
@@ -246,17 +246,11 @@ define double @fcvt_d_wu(i32 %a) nounwind {
 }
 
 define double @fcvt_d_wu_load(ptr %p) nounwind {
-; RV32IFD-LABEL: fcvt_d_wu_load:
-; RV32IFD:       # %bb.0:
-; RV32IFD-NEXT:    lw a0, 0(a0)
-; RV32IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV32IFD-NEXT:    ret
-;
-; RV64IFD-LABEL: fcvt_d_wu_load:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    lwu a0, 0(a0)
-; RV64IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV64IFD-NEXT:    ret
+; CHECKIFD-LABEL: fcvt_d_wu_load:
+; CHECKIFD:       # %bb.0:
+; CHECKIFD-NEXT:    lw a0, 0(a0)
+; CHECKIFD-NEXT:    fcvt.d.wu fa0, a0
+; CHECKIFD-NEXT:    ret
 ;
 ; RV32I-LABEL: fcvt_d_wu_load:
 ; RV32I:       # %bb.0:
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/float-convert.ll b/llvm/test/CodeGen/RISCV/GlobalISel/float-convert.ll
index fa093623dd6f8..bbea7929a304e 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/float-convert.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/float-convert.ll
@@ -232,17 +232,11 @@ define float @fcvt_s_wu(i32 %a) nounwind {
 }
 
 define float @fcvt_s_wu_load(ptr %p) nounwind {
-; RV32IF-LABEL: fcvt_s_wu_load:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    lw a0, 0(a0)
-; RV32IF-NEXT:    fcvt.s.wu fa0, a0
-; RV32IF-NEXT:    ret
-;
-; RV64IF-LABEL: fcvt_s_wu_load:
-; RV64IF:       # %bb.0:
-; RV64IF-NEXT:    lwu a0, 0(a0)
-; RV64IF-NEXT:    fcvt.s.wu fa0, a0
-; RV64IF-NEXT:    ret
+; CHECKIF-LABEL: fcvt_s_wu_load:
+; CHECKIF:       # %bb.0:
+; CHECKIF-NEXT:    lw a0, 0(a0)
+; CHECKIF-NEXT:    fcvt.s.wu fa0, a0
+; CHECKIF-NEXT:    ret
 ;
 ; RV32I-LABEL: fcvt_s_wu_load:
 ; RV32I:       # %bb.0:
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
index 9690302552090..65838f51fc920 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
@@ -748,7 +748,7 @@ define signext i32 @ctpop_i32_load(ptr %p) nounwind {
 ;
 ; RV64ZBB-LABEL: ctpop_i32_load:
 ; RV64ZBB:       # %bb.0:
-; RV64ZBB-NEXT:    lwu a0, 0(a0)
+; RV64ZBB-NEXT:    lw a0, 0(a0)
 ; RV64ZBB-NEXT:    cpopw a0, a0
 ; RV64ZBB-NEXT:    ret
   %a = load i32, ptr %p
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
index cd59c9e01806d..ba058ca0b500a 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
@@ -114,7 +114,7 @@ define i64 @pack_i64_2(i32 signext %a, i32 signext %b) nounwind {
 define i64 @pack_i64_3(ptr %0, ptr %1) {
 ; RV64I-LABEL: pack_i64_3:
 ; RV64I:       # %bb.0:
-; RV64I-NEXT:    lwu a0, 0(a0)
+; RV64I-NEXT:    lw a0, 0(a0)
 ; RV64I-NEXT:    lwu a1, 0(a1)
 ; RV64I-NEXT:    slli a0, a0, 32
 ; RV64I-NEXT:    or a0, a0, a1
@@ -122,8 +122,8 @@ define i64 @pack_i64_3(ptr %0, ptr %1) {
 ;
 ; RV64ZBKB-LABEL: pack_i64_3:
 ; RV64ZBKB:       # %bb.0:
-; RV64ZBKB-NEXT:    lwu a0, 0(a0)
-; RV64ZBKB-NEXT:    lwu a1, 0(a1)
+; RV64ZBKB-NEXT:    lw a0, 0(a0)
+; RV64ZBKB-NEXT:    lw a1, 0(a1)
 ; RV64ZBKB-NEXT:    pack a0, a1, a0
 ; RV64ZBKB-NEXT:    ret
   %3 = load i32, ptr %0, align 4
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/wide-scalar-shift-by-byte-multiple-legalization.ll b/llvm/test/CodeGen/RISCV/GlobalISel/wide-scalar-shift-by-byte-multiple-legalization.ll
index 69519c00f88ea..27c6d0240f987 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/wide-scalar-shift-by-byte-multiple-legalization.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/wide-scalar-shift-by-byte-multiple-legalization.ll
@@ -8,13 +8,13 @@ define void @lshr_4bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a3, 1(a0)
 ; RV64I-NEXT:    lbu a4, 0(a0)
 ; RV64I-NEXT:    lbu a5, 2(a0)
-; RV64I-NEXT:    lbu a0, 3(a0)
+; RV64I-NEXT:    lb a0, 3(a0)
 ; RV64I-NEXT:    slli a3, a3, 8
 ; RV64I-NEXT:    or a3, a3, a4
 ; RV64I-NEXT:    lbu a4, 0(a1)
 ; RV64I-NEXT:    lbu a6, 1(a1)
 ; RV64I-NEXT:    lbu a7, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    or a0, a0, a5
 ; RV64I-NEXT:    slli a6, a6, 8
@@ -85,13 +85,13 @@ define void @shl_4bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a3, 1(a0)
 ; RV64I-NEXT:    lbu a4, 0(a0)
 ; RV64I-NEXT:    lbu a5, 2(a0)
-; RV64I-NEXT:    lbu a0, 3(a0)
+; RV64I-NEXT:    lb a0, 3(a0)
 ; RV64I-NEXT:    slli a3, a3, 8
 ; RV64I-NEXT:    or a3, a3, a4
 ; RV64I-NEXT:    lbu a4, 0(a1)
 ; RV64I-NEXT:    lbu a6, 1(a1)
 ; RV64I-NEXT:    lbu a7, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    or a0, a0, a5
 ; RV64I-NEXT:    slli a6, a6, 8
@@ -162,13 +162,13 @@ define void @ashr_4bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a3, 1(a0)
 ; RV64I-NEXT:    lbu a4, 0(a0)
 ; RV64I-NEXT:    lbu a5, 2(a0)
-; RV64I-NEXT:    lbu a0, 3(a0)
+; RV64I-NEXT:    lb a0, 3(a0)
 ; RV64I-NEXT:    slli a3, a3, 8
 ; RV64I-NEXT:    or a3, a3, a4
 ; RV64I-NEXT:    lbu a4, 0(a1)
 ; RV64I-NEXT:    lbu a6, 1(a1)
 ; RV64I-NEXT:    lbu a7, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    or a0, a0, a5
 ; RV64I-NEXT:    slli a6, a6, 8
@@ -244,25 +244,25 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
-; RV64I-NEXT:    lbu a5, 0(a1)
-; RV64I-NEXT:    lbu a6, 1(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu t3, 3(a1)
+; RV64I-NEXT:    lb a5, 0(a1)
+; RV64I-NEXT:    lb a6, 1(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb t3, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a7, t0, a7
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a5, a6, a5
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    or t2, t3, t2
 ; RV64I-NEXT:    slli t0, t0, 8
@@ -395,25 +395,25 @@ define void @shl_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
-; RV64I-NEXT:    lbu a5, 0(a1)
-; RV64I-NEXT:    lbu a6, 1(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu t3, 3(a1)
+; RV64I-NEXT:    lb a5, 0(a1)
+; RV64I-NEXT:    lb a6, 1(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb t3, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a7, t0, a7
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a5, a6, a5
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    or t2, t3, t2
 ; RV64I-NEXT:    slli t0, t0, 8
@@ -541,25 +541,25 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
-; RV64I-NEXT:    lbu a5, 0(a1)
-; RV64I-NEXT:    lbu a6, 1(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu t3, 3(a1)
+; RV64I-NEXT:    lb a5, 0(a1)
+; RV64I-NEXT:    lb a6, 1(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb t3, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli a0, a0, 8
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    or a7, t0, a7
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a5, a6, a5
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    or t2, t3, t2
 ; RV64I-NEXT:    slli t0, t0, 8
@@ -695,7 +695,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -707,7 +707,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -729,7 +729,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1028,7 +1028,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -1040,7 +1040,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1062,7 +1062,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1361,7 +1361,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -1373,7 +1373,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1395,7 +1395,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1690,7 +1690,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -1702,7 +1702,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -1724,7 +1724,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -2020,7 +2020,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2032,7 +2032,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -2054,7 +2054,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -2353,7 +2353,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2365,7 +2365,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a5, 12(a0)
 ; RV64I-NEXT:    lbu a6, 13(a0)
 ; RV64I-NEXT:    lbu s0, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -2387,7 +2387,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu t3, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu s0, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t6, t6, 8
 ; RV64I-NEXT:    or t5, t6, t5
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -2697,7 +2697,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2705,7 +2705,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -2719,7 +2719,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -2741,7 +2741,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -2763,7 +2763,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
@@ -3621,7 +3621,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -3629,7 +3629,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -3643,7 +3643,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -3665,7 +3665,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -3687,7 +3687,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
@@ -4545,7 +4545,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -4553,7 +4553,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -4567,7 +4567,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -4589,7 +4589,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -4611,7 +4611,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
@@ -5470,7 +5470,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -5480,7 +5480,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a4, 12(a0)
 ; RV64I-NEXT:    lbu s0, 13(a0)
 ; RV64I-NEXT:    lbu s1, 14(a0)
-; RV64I-NEXT:    lbu s2, 15(a0)
+; RV64I-NEXT:    lb s2, 15(a0)
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
@@ -5502,7 +5502,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a4, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu t5, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 8
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -5514,7 +5514,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t5, 19(a0)
 ; RV64I-NEXT:    lbu t4, 21(a0)
 ; RV64I-NEXT:    lbu a4, 22(a0)
-; RV64I-NEXT:    lbu t3, 23(a0)
+; RV64I-NEXT:    lb t3, 23(a0)
 ; RV64I-NEXT:    slli s3, s3, 16
 ; RV64I-NEXT:    slli a7, a7, 16
 ; RV64I-NEXT:    slli t6, t6, 16
@@ -5526,7 +5526,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s1, 27(a0)
 ; RV64I-NEXT:    lbu t6, 29(a0)
 ; RV64I-NEXT:    lbu a3, 30(a0)
-; RV64I-NEXT:    lbu s2, 31(a0)
+; RV64I-NEXT:    lb s2, 31(a0)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    or s5, t1, t0
 ; RV64I-NEXT:    li a7, 128
@@ -6384,7 +6384,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -6394,7 +6394,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a4, 12(a0)
 ; RV64I-NEXT:    lbu s0, 13(a0)
 ; RV64I-NEXT:    lbu s1, 14(a0)
-; RV64I-NEXT:    lbu s2, 15(a0)
+; RV64I-NEXT:    lb s2, 15(a0)
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
@@ -6416,7 +6416,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a4, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu t5, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 8
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -6428,7 +6428,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu t5, 19(a0)
 ; RV64I-NEXT:    lbu t4, 21(a0)
 ; RV64I-NEXT:    lbu a4, 22(a0)
-; RV64I-NEXT:    lbu t3, 23(a0)
+; RV64I-NEXT:    lb t3, 23(a0)
 ; RV64I-NEXT:    slli s3, s3, 16
 ; RV64I-NEXT:    slli a7, a7, 16
 ; RV64I-NEXT:    slli t6, t6, 16
@@ -6440,7 +6440,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu s1, 27(a0)
 ; RV64I-NEXT:    lbu t6, 29(a0)
 ; RV64I-NEXT:    lbu a3, 30(a0)
-; RV64I-NEXT:    lbu s2, 31(a0)
+; RV64I-NEXT:    lb s2, 31(a0)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    or s5, t1, t0
 ; RV64I-NEXT:    li a7, 128
@@ -7298,7 +7298,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -7308,7 +7308,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu a4, 12(a0)
 ; RV64I-NEXT:    lbu s0, 13(a0)
 ; RV64I-NEXT:    lbu s1, 14(a0)
-; RV64I-NEXT:    lbu s2, 15(a0)
+; RV64I-NEXT:    lb s2, 15(a0)
 ; RV64I-NEXT:    slli a6, a6, 8
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    slli t2, t2, 8
@@ -7330,7 +7330,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu a4, 4(a1)
 ; RV64I-NEXT:    lbu t4, 5(a1)
 ; RV64I-NEXT:    lbu t5, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 8
 ; RV64I-NEXT:    slli t3, t3, 8
 ; RV64I-NEXT:    slli t4, t4, 8
@@ -7342,7 +7342,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu t5, 19(a0)
 ; RV64I-NEXT:    lbu t4, 21(a0)
 ; RV64I-NEXT:    lbu a4, 22(a0)
-; RV64I-NEXT:    lbu t3, 23(a0)
+; RV64I-NEXT:    lb t3, 23(a0)
 ; RV64I-NEXT:    slli s3, s3, 16
 ; RV64I-NEXT:    slli a7, a7, 16
 ; RV64I-NEXT:    slli t6, t6, 16
@@ -7354,7 +7354,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu s1, 27(a0)
 ; RV64I-NEXT:    lbu t6, 29(a0)
 ; RV64I-NEXT:    lbu a3, 30(a0)
-; RV64I-NEXT:    lbu s2, 31(a0)
+; RV64I-NEXT:    lb s2, 31(a0)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    or s5, t1, t0
 ; RV64I-NEXT:    li a7, 128
@@ -8211,7 +8211,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -8219,7 +8219,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -8233,7 +8233,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -8255,7 +8255,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -8277,7 +8277,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
@@ -9142,7 +9142,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -9150,7 +9150,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -9164,7 +9164,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -9186,7 +9186,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -9208,7 +9208,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
@@ -10073,7 +10073,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -10081,7 +10081,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -10095,7 +10095,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t2, t2, 8
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -10117,7 +10117,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s2, 28(a0)
 ; RV64I-NEXT:    lbu s3, 29(a0)
 ; RV64I-NEXT:    lbu s4, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
+; RV64I-NEXT:    lb a0, 31(a0)
 ; RV64I-NEXT:    slli s9, s9, 8
 ; RV64I-NEXT:    slli s11, s11, 8
 ; RV64I-NEXT:    slli t6, t6, 8
@@ -10139,7 +10139,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a0, 4(a1)
 ; RV64I-NEXT:    lbu s1, 5(a1)
 ; RV64I-NEXT:    lbu s4, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s8, s8, 8
 ; RV64I-NEXT:    or s7, s8, s7
 ; RV64I-NEXT:    slli s1, s1, 8
diff --git a/llvm/test/CodeGen/RISCV/atomic-signext.ll b/llvm/test/CodeGen/RISCV/atomic-signext.ll
index bebc097deb192..7d29ac9944834 100644
--- a/llvm/test/CodeGen/RISCV/atomic-signext.ll
+++ b/llvm/test/CodeGen/RISCV/atomic-signext.ll
@@ -4582,7 +4582,7 @@ define signext i32 @atomicrmw_and_i32_monotonic_crossbb(ptr %a, i1 %c) nounwind
 ; RV64I-NEXT:    sext.w a0, a0
 ; RV64I-NEXT:    ret
 ; RV64I-NEXT:  .LBB56_2: # %else
-; RV64I-NEXT:    lwu a1, 0(a0)
+; RV64I-NEXT:    lw a1, 0(a0)
 ; RV64I-NEXT:    andi a2, a1, 1
 ; RV64I-NEXT:    sw a2, 0(a0)
 ; RV64I-NEXT:    sext.w a0, a1
@@ -4700,7 +4700,7 @@ define signext i32 @atomicrmw_nand_i32_monotonic_crossbb(ptr %a, i1 %c) nounwind
 ; RV64I-NEXT:    sext.w a0, a0
 ; RV64I-NEXT:    ret
 ; RV64I-NEXT:  .LBB57_2: # %else
-; RV64I-NEXT:    lwu a1, 0(a0)
+; RV64I-NEXT:    lw a1, 0(a0)
 ; RV64I-NEXT:    andi a2, a1, 1
 ; RV64I-NEXT:    sw a2, 0(a0)
 ; RV64I-NEXT:    sext.w a0, a1
diff --git a/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll b/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
index 27704d107f93d..352d91b7fd2c3 100644
--- a/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
+++ b/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
@@ -786,7 +786,7 @@ define i8 @atomicrmw_usub_sat_i8(ptr %ptr, i8 %val) {
 ; RV64I-NEXT:    .cfi_offset s0, -16
 ; RV64I-NEXT:    .cfi_offset s1, -24
 ; RV64I-NEXT:    mv s0, a0
-; RV64I-NEXT:    lbu a3, 0(a0)
+; RV64I-NEXT:    lb a3, 0(a0)
 ; RV64I-NEXT:    zext.b s1, a1
 ; RV64I-NEXT:  .LBB4_1: # %atomicrmw.start
 ; RV64I-NEXT:    # =>This Inner Loop Header: Depth=1
diff --git a/llvm/test/CodeGen/RISCV/bf16-promote.ll b/llvm/test/CodeGen/RISCV/bf16-promote.ll
index 08c053fab4f67..d19a616b73156 100644
--- a/llvm/test/CodeGen/RISCV/bf16-promote.ll
+++ b/llvm/test/CodeGen/RISCV/bf16-promote.ll
@@ -14,25 +14,40 @@ define void @test_load_store(ptr %p, ptr %q) nounwind {
 }
 
 define float @test_fpextend_float(ptr %p) nounwind {
-; CHECK-LABEL: test_fpextend_float:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lhu a0, 0(a0)
-; CHECK-NEXT:    slli a0, a0, 16
-; CHECK-NEXT:    fmv.w.x fa0, a0
-; CHECK-NEXT:    ret
+; RV64-LABEL: test_fpextend_float:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lh a0, 0(a0)
+; RV64-NEXT:    slli a0, a0, 16
+; RV64-NEXT:    fmv.w.x fa0, a0
+; RV64-NEXT:    ret
+;
+; RV32-LABEL: test_fpextend_float:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lhu a0, 0(a0)
+; RV32-NEXT:    slli a0, a0, 16
+; RV32-NEXT:    fmv.w.x fa0, a0
+; RV32-NEXT:    ret
   %a = load bfloat, ptr %p
   %r = fpext bfloat %a to float
   ret float %r
 }
 
 define double @test_fpextend_double(ptr %p) nounwind {
-; CHECK-LABEL: test_fpextend_double:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lhu a0, 0(a0)
-; CHECK-NEXT:    slli a0, a0, 16
-; CHECK-NEXT:    fmv.w.x fa5, a0
-; CHECK-NEXT:    fcvt.d.s fa0, fa5
-; CHECK-NEXT:    ret
+; RV64-LABEL: test_fpextend_double:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lh a0, 0(a0)
+; RV64-NEXT:    slli a0, a0, 16
+; RV64-NEXT:    fmv.w.x fa5, a0
+; RV64-NEXT:    fcvt.d.s fa0, fa5
+; RV64-NEXT:    ret
+;
+; RV32-LABEL: test_fpextend_double:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lhu a0, 0(a0)
+; RV32-NEXT:    slli a0, a0, 16
+; RV32-NEXT:    fmv.w.x fa5, a0
+; RV32-NEXT:    fcvt.d.s fa0, fa5
+; RV32-NEXT:    ret
   %a = load bfloat, ptr %p
   %r = fpext bfloat %a to double
   ret double %r
@@ -111,8 +126,8 @@ define void @test_fadd(ptr %p, ptr %q) nounwind {
 ; RV64-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
 ; RV64-NEXT:    sd s0, 0(sp) # 8-byte Folded Spill
 ; RV64-NEXT:    mv s0, a0
-; RV64-NEXT:    lhu a0, 0(a1)
-; RV64-NEXT:    lhu a1, 0(s0)
+; RV64-NEXT:    lh a0, 0(a1)
+; RV64-NEXT:    lh a1, 0(s0)
 ; RV64-NEXT:    slli a0, a0, 16
 ; RV64-NEXT:    slli a1, a1, 16
 ; RV64-NEXT:    fmv.w.x fa5, a0
diff --git a/llvm/test/CodeGen/RISCV/bfloat-convert.ll b/llvm/test/CodeGen/RISCV/bfloat-convert.ll
index 3422ea63e748a..6207a17734d62 100644
--- a/llvm/test/CodeGen/RISCV/bfloat-convert.ll
+++ b/llvm/test/CodeGen/RISCV/bfloat-convert.ll
@@ -1074,7 +1074,7 @@ define bfloat @fcvt_bf16_wu_load(ptr %p) nounwind {
 ;
 ; CHECK64ZFBFMIN-LABEL: fcvt_bf16_wu_load:
 ; CHECK64ZFBFMIN:       # %bb.0:
-; CHECK64ZFBFMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64ZFBFMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64ZFBFMIN-NEXT:    fcvt.s.wu fa5, a0
 ; CHECK64ZFBFMIN-NEXT:    fcvt.bf16.s fa0, fa5
 ; CHECK64ZFBFMIN-NEXT:    ret
@@ -1083,7 +1083,7 @@ define bfloat @fcvt_bf16_wu_load(ptr %p) nounwind {
 ; RV64ID:       # %bb.0:
 ; RV64ID-NEXT:    addi sp, sp, -16
 ; RV64ID-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
-; RV64ID-NEXT:    lwu a0, 0(a0)
+; RV64ID-NEXT:    lw a0, 0(a0)
 ; RV64ID-NEXT:    fcvt.s.wu fa0, a0
 ; RV64ID-NEXT:    call __truncsfbf2
 ; RV64ID-NEXT:    fmv.x.w a0, fa0
diff --git a/llvm/test/CodeGen/RISCV/bfloat.ll b/llvm/test/CodeGen/RISCV/bfloat.ll
index c83b0ed6b0eee..420562908773d 100644
--- a/llvm/test/CodeGen/RISCV/bfloat.ll
+++ b/llvm/test/CodeGen/RISCV/bfloat.ll
@@ -466,8 +466,8 @@ define bfloat @bfloat_load(ptr %a) nounwind {
 ; RV64ID-LP64:       # %bb.0:
 ; RV64ID-LP64-NEXT:    addi sp, sp, -16
 ; RV64ID-LP64-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
-; RV64ID-LP64-NEXT:    lhu a1, 6(a0)
-; RV64ID-LP64-NEXT:    lhu a0, 0(a0)
+; RV64ID-LP64-NEXT:    lh a1, 6(a0)
+; RV64ID-LP64-NEXT:    lh a0, 0(a0)
 ; RV64ID-LP64-NEXT:    slli a1, a1, 16
 ; RV64ID-LP64-NEXT:    slli a0, a0, 16
 ; RV64ID-LP64-NEXT:    fmv.w.x fa5, a1
@@ -505,8 +505,8 @@ define bfloat @bfloat_load(ptr %a) nounwind {
 ; RV64ID-LP64D:       # %bb.0:
 ; RV64ID-LP64D-NEXT:    addi sp, sp, -16
 ; RV64ID-LP64D-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
-; RV64ID-LP64D-NEXT:    lhu a1, 6(a0)
-; RV64ID-LP64D-NEXT:    lhu a0, 0(a0)
+; RV64ID-LP64D-NEXT:    lh a1, 6(a0)
+; RV64ID-LP64D-NEXT:    lh a0, 0(a0)
 ; RV64ID-LP64D-NEXT:    slli a1, a1, 16
 ; RV64ID-LP64D-NEXT:    slli a0, a0, 16
 ; RV64ID-LP64D-NEXT:    fmv.w.x fa5, a1
diff --git a/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll b/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
index 637fb314e9536..4348489e8ea53 100644
--- a/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
+++ b/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
@@ -96,7 +96,7 @@ define signext i32 @ctz_dereferencing_pointer(ptr %b) nounwind {
 ; RV64I-NEXT:    lui a1, %hi(.LCPI0_1)
 ; RV64I-NEXT:    addi a1, a1, %lo(.LCPI0_1)
 ; RV64I-NEXT:    add a0, a1, a0
-; RV64I-NEXT:    lbu a0, 0(a0)
+; RV64I-NEXT:    lb a0, 0(a0)
 ; RV64I-NEXT:    seqz a1, s0
 ; RV64I-NEXT:    addi a1, a1, -1
 ; RV64I-NEXT:    and a0, a0, a1
@@ -189,7 +189,7 @@ define i64 @ctz_dereferencing_pointer_zext(ptr %b) nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI1_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI1_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
@@ -274,7 +274,7 @@ define signext i32 @ctz1(i32 signext %x) nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI2_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI2_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
@@ -357,7 +357,7 @@ define signext i32 @ctz1_flipped(i32 signext %x) nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI3_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI3_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
@@ -616,7 +616,7 @@ define signext i32 @ctz4(i64 %b) nounwind {
 ; RV64I-NEXT:    lui a1, %hi(.LCPI6_1)
 ; RV64I-NEXT:    addi a1, a1, %lo(.LCPI6_1)
 ; RV64I-NEXT:    add a0, a1, a0
-; RV64I-NEXT:    lbu a0, 0(a0)
+; RV64I-NEXT:    lb a0, 0(a0)
 ; RV64I-NEXT:    seqz a1, s0
 ; RV64I-NEXT:    addi a1, a1, -1
 ; RV64I-NEXT:    and a0, a0, a1
@@ -850,7 +850,7 @@ define signext i32 @ctz5(i32 signext %x) nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI8_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI8_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
@@ -933,7 +933,7 @@ define signext i32 @ctz6(i32 signext %x) nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI9_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI9_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
@@ -1023,7 +1023,7 @@ define signext i32 @globalVar() nounwind {
 ; RV64I-NEXT:    lui a2, %hi(.LCPI10_0)
 ; RV64I-NEXT:    addi a2, a2, %lo(.LCPI10_0)
 ; RV64I-NEXT:    add a1, a2, a1
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    seqz a0, a0
 ; RV64I-NEXT:    addi a0, a0, -1
 ; RV64I-NEXT:    and a0, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/double-convert-strict.ll b/llvm/test/CodeGen/RISCV/double-convert-strict.ll
index 2b1ec10fcaf17..9a5e357b05a17 100644
--- a/llvm/test/CodeGen/RISCV/double-convert-strict.ll
+++ b/llvm/test/CodeGen/RISCV/double-convert-strict.ll
@@ -347,17 +347,11 @@ define double @fcvt_d_wu(i32 %a) nounwind strictfp {
 declare double @llvm.experimental.constrained.uitofp.f64.i32(i32, metadata, metadata)
 
 define double @fcvt_d_wu_load(ptr %p) nounwind strictfp {
-; RV32IFD-LABEL: fcvt_d_wu_load:
-; RV32IFD:       # %bb.0:
-; RV32IFD-NEXT:    lw a0, 0(a0)
-; RV32IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV32IFD-NEXT:    ret
-;
-; RV64IFD-LABEL: fcvt_d_wu_load:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    lwu a0, 0(a0)
-; RV64IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV64IFD-NEXT:    ret
+; CHECKIFD-LABEL: fcvt_d_wu_load:
+; CHECKIFD:       # %bb.0:
+; CHECKIFD-NEXT:    lw a0, 0(a0)
+; CHECKIFD-NEXT:    fcvt.d.wu fa0, a0
+; CHECKIFD-NEXT:    ret
 ;
 ; RV32IZFINXZDINX-LABEL: fcvt_d_wu_load:
 ; RV32IZFINXZDINX:       # %bb.0:
@@ -367,7 +361,7 @@ define double @fcvt_d_wu_load(ptr %p) nounwind strictfp {
 ;
 ; RV64IZFINXZDINX-LABEL: fcvt_d_wu_load:
 ; RV64IZFINXZDINX:       # %bb.0:
-; RV64IZFINXZDINX-NEXT:    lwu a0, 0(a0)
+; RV64IZFINXZDINX-NEXT:    lw a0, 0(a0)
 ; RV64IZFINXZDINX-NEXT:    fcvt.d.wu a0, a0
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
diff --git a/llvm/test/CodeGen/RISCV/double-convert.ll b/llvm/test/CodeGen/RISCV/double-convert.ll
index fad9e21fee72b..a2e6186e051bf 100644
--- a/llvm/test/CodeGen/RISCV/double-convert.ll
+++ b/llvm/test/CodeGen/RISCV/double-convert.ll
@@ -582,17 +582,11 @@ define double @fcvt_d_wu(i32 %a) nounwind {
 }
 
 define double @fcvt_d_wu_load(ptr %p) nounwind {
-; RV32IFD-LABEL: fcvt_d_wu_load:
-; RV32IFD:       # %bb.0:
-; RV32IFD-NEXT:    lw a0, 0(a0)
-; RV32IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV32IFD-NEXT:    ret
-;
-; RV64IFD-LABEL: fcvt_d_wu_load:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    lwu a0, 0(a0)
-; RV64IFD-NEXT:    fcvt.d.wu fa0, a0
-; RV64IFD-NEXT:    ret
+; CHECKIFD-LABEL: fcvt_d_wu_load:
+; CHECKIFD:       # %bb.0:
+; CHECKIFD-NEXT:    lw a0, 0(a0)
+; CHECKIFD-NEXT:    fcvt.d.wu fa0, a0
+; CHECKIFD-NEXT:    ret
 ;
 ; RV32IZFINXZDINX-LABEL: fcvt_d_wu_load:
 ; RV32IZFINXZDINX:       # %bb.0:
@@ -602,7 +596,7 @@ define double @fcvt_d_wu_load(ptr %p) nounwind {
 ;
 ; RV64IZFINXZDINX-LABEL: fcvt_d_wu_load:
 ; RV64IZFINXZDINX:       # %bb.0:
-; RV64IZFINXZDINX-NEXT:    lwu a0, 0(a0)
+; RV64IZFINXZDINX-NEXT:    lw a0, 0(a0)
 ; RV64IZFINXZDINX-NEXT:    fcvt.d.wu a0, a0
 ; RV64IZFINXZDINX-NEXT:    ret
 ;
diff --git a/llvm/test/CodeGen/RISCV/float-convert-strict.ll b/llvm/test/CodeGen/RISCV/float-convert-strict.ll
index 0c265e11652a2..1b25a2b64f4d3 100644
--- a/llvm/test/CodeGen/RISCV/float-convert-strict.ll
+++ b/llvm/test/CodeGen/RISCV/float-convert-strict.ll
@@ -236,29 +236,17 @@ define float @fcvt_s_wu(i32 %a) nounwind strictfp {
 declare float @llvm.experimental.constrained.uitofp.f32.i32(i32 %a, metadata, metadata)
 
 define float @fcvt_s_wu_load(ptr %p) nounwind strictfp {
-; RV32IF-LABEL: fcvt_s_wu_load:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    lw a0, 0(a0)
-; RV32IF-NEXT:    fcvt.s.wu fa0, a0
-; RV32IF-NEXT:    ret
-;
-; RV64IF-LABEL: fcvt_s_wu_load:
-; RV64IF:       # %bb.0:
-; RV64IF-NEXT:    lwu a0, 0(a0)
-; RV64IF-NEXT:    fcvt.s.wu fa0, a0
-; RV64IF-NEXT:    ret
-;
-; RV32IZFINX-LABEL: fcvt_s_wu_load:
-; RV32IZFINX:       # %bb.0:
-; RV32IZFINX-NEXT:    lw a0, 0(a0)
-; RV32IZFINX-NEXT:    fcvt.s.wu a0, a0
-; RV32IZFINX-NEXT:    ret
+; CHECKIF-LABEL: fcvt_s_wu_load:
+; CHECKIF:       # %bb.0:
+; CHECKIF-NEXT:    lw a0, 0(a0)
+; CHECKIF-NEXT:    fcvt.s.wu fa0, a0
+; CHECKIF-NEXT:    ret
 ;
-; RV64IZFINX-LABEL: fcvt_s_wu_load:
-; RV64IZFINX:       # %bb.0:
-; RV64IZFINX-NEXT:    lwu a0, 0(a0)
-; RV64IZFINX-NEXT:    fcvt.s.wu a0, a0
-; RV64IZFINX-NEXT:    ret
+; CHECKIZFINX-LABEL: fcvt_s_wu_load:
+; CHECKIZFINX:       # %bb.0:
+; CHECKIZFINX-NEXT:    lw a0, 0(a0)
+; CHECKIZFINX-NEXT:    fcvt.s.wu a0, a0
+; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcvt_s_wu_load:
 ; RV32I:       # %bb.0:
diff --git a/llvm/test/CodeGen/RISCV/float-convert.ll b/llvm/test/CodeGen/RISCV/float-convert.ll
index 797d59dfcdc86..c9d33a414b855 100644
--- a/llvm/test/CodeGen/RISCV/float-convert.ll
+++ b/llvm/test/CodeGen/RISCV/float-convert.ll
@@ -482,29 +482,17 @@ define float @fcvt_s_wu(i32 %a) nounwind {
 }
 
 define float @fcvt_s_wu_load(ptr %p) nounwind {
-; RV32IF-LABEL: fcvt_s_wu_load:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    lw a0, 0(a0)
-; RV32IF-NEXT:    fcvt.s.wu fa0, a0
-; RV32IF-NEXT:    ret
-;
-; RV64IF-LABEL: fcvt_s_wu_load:
-; RV64IF:       # %bb.0:
-; RV64IF-NEXT:    lwu a0, 0(a0)
-; RV64IF-NEXT:    fcvt.s.wu fa0, a0
-; RV64IF-NEXT:    ret
-;
-; RV32IZFINX-LABEL: fcvt_s_wu_load:
-; RV32IZFINX:       # %bb.0:
-; RV32IZFINX-NEXT:    lw a0, 0(a0)
-; RV32IZFINX-NEXT:    fcvt.s.wu a0, a0
-; RV32IZFINX-NEXT:    ret
+; CHECKIF-LABEL: fcvt_s_wu_load:
+; CHECKIF:       # %bb.0:
+; CHECKIF-NEXT:    lw a0, 0(a0)
+; CHECKIF-NEXT:    fcvt.s.wu fa0, a0
+; CHECKIF-NEXT:    ret
 ;
-; RV64IZFINX-LABEL: fcvt_s_wu_load:
-; RV64IZFINX:       # %bb.0:
-; RV64IZFINX-NEXT:    lwu a0, 0(a0)
-; RV64IZFINX-NEXT:    fcvt.s.wu a0, a0
-; RV64IZFINX-NEXT:    ret
+; CHECKIZFINX-LABEL: fcvt_s_wu_load:
+; CHECKIZFINX:       # %bb.0:
+; CHECKIZFINX-NEXT:    lw a0, 0(a0)
+; CHECKIZFINX-NEXT:    fcvt.s.wu a0, a0
+; CHECKIZFINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcvt_s_wu_load:
 ; RV32I:       # %bb.0:
diff --git a/llvm/test/CodeGen/RISCV/fold-mem-offset.ll b/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
index e6f1a08cc1879..34ca7322b05ac 100644
--- a/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
+++ b/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
@@ -176,25 +176,45 @@ entry:
 }
 
 define zeroext i8 @test_add(ptr %p, iXLen %x, iXLen %y) {
-; CHECK-LABEL: test_add:
-; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    add a1, a0, a1
-; CHECK-NEXT:    add a0, a2, a0
-; CHECK-NEXT:    lbu a1, 1800(a1)
-; CHECK-NEXT:    lbu a0, 1810(a0)
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    zext.b a0, a0
-; CHECK-NEXT:    ret
-;
-; ZBA-LABEL: test_add:
-; ZBA:       # %bb.0: # %entry
-; ZBA-NEXT:    add a1, a0, a1
-; ZBA-NEXT:    add a0, a2, a0
-; ZBA-NEXT:    lbu a1, 1800(a1)
-; ZBA-NEXT:    lbu a0, 1810(a0)
-; ZBA-NEXT:    add a0, a0, a1
-; ZBA-NEXT:    zext.b a0, a0
-; ZBA-NEXT:    ret
+; RV32I-LABEL: test_add:
+; RV32I:       # %bb.0: # %entry
+; RV32I-NEXT:    add a1, a0, a1
+; RV32I-NEXT:    add a0, a2, a0
+; RV32I-NEXT:    lbu a1, 1800(a1)
+; RV32I-NEXT:    lbu a0, 1810(a0)
+; RV32I-NEXT:    add a0, a0, a1
+; RV32I-NEXT:    zext.b a0, a0
+; RV32I-NEXT:    ret
+;
+; RV64I-LABEL: test_add:
+; RV64I:       # %bb.0: # %entry
+; RV64I-NEXT:    add a1, a0, a1
+; RV64I-NEXT:    add a0, a2, a0
+; RV64I-NEXT:    lb a1, 1800(a1)
+; RV64I-NEXT:    lb a0, 1810(a0)
+; RV64I-NEXT:    add a0, a0, a1
+; RV64I-NEXT:    zext.b a0, a0
+; RV64I-NEXT:    ret
+;
+; RV32ZBA-LABEL: test_add:
+; RV32ZBA:       # %bb.0: # %entry
+; RV32ZBA-NEXT:    add a1, a0, a1
+; RV32ZBA-NEXT:    add a0, a2, a0
+; RV32ZBA-NEXT:    lbu a1, 1800(a1)
+; RV32ZBA-NEXT:    lbu a0, 1810(a0)
+; RV32ZBA-NEXT:    add a0, a0, a1
+; RV32ZBA-NEXT:    zext.b a0, a0
+; RV32ZBA-NEXT:    ret
+;
+; RV64ZBA-LABEL: test_add:
+; RV64ZBA:       # %bb.0: # %entry
+; RV64ZBA-NEXT:    add a1, a0, a1
+; RV64ZBA-NEXT:    add a0, a2, a0
+; RV64ZBA-NEXT:    lb a1, 1800(a1)
+; RV64ZBA-NEXT:    lb a0, 1810(a0)
+; RV64ZBA-NEXT:    add a0, a0, a1
+; RV64ZBA-NEXT:    zext.b a0, a0
+; RV64ZBA-NEXT:    ret
 entry:
   %e = getelementptr inbounds nuw i8, ptr %p, i64 1800
   %arrayidx = getelementptr inbounds nuw [1000 x i8], ptr %e, i64 0, iXLen %x
@@ -408,8 +428,8 @@ define zeroext i8 @test_add_uw(ptr %p, i32 signext %x, i32 signext %y) {
 ; RV64I-NEXT:    srli a2, a2, 32
 ; RV64I-NEXT:    add a1, a0, a1
 ; RV64I-NEXT:    add a0, a0, a2
-; RV64I-NEXT:    lbu a1, 1800(a1)
-; RV64I-NEXT:    lbu a0, 1800(a0)
+; RV64I-NEXT:    lb a1, 1800(a1)
+; RV64I-NEXT:    lb a0, 1800(a0)
 ; RV64I-NEXT:    add a0, a0, a1
 ; RV64I-NEXT:    zext.b a0, a0
 ; RV64I-NEXT:    ret
@@ -428,8 +448,8 @@ define zeroext i8 @test_add_uw(ptr %p, i32 signext %x, i32 signext %y) {
 ; RV64ZBA:       # %bb.0: # %entry
 ; RV64ZBA-NEXT:    add.uw a1, a1, a0
 ; RV64ZBA-NEXT:    add.uw a0, a2, a0
-; RV64ZBA-NEXT:    lbu a1, 1800(a1)
-; RV64ZBA-NEXT:    lbu a0, 1800(a0)
+; RV64ZBA-NEXT:    lb a1, 1800(a1)
+; RV64ZBA-NEXT:    lb a0, 1800(a0)
 ; RV64ZBA-NEXT:    add a0, a0, a1
 ; RV64ZBA-NEXT:    zext.b a0, a0
 ; RV64ZBA-NEXT:    ret
@@ -666,27 +686,49 @@ entry:
 }
 
 define zeroext i8 @test_optsize(ptr %p, iXLen %x, iXLen %y) optsize {
-; CHECK-LABEL: test_optsize:
-; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    addi a0, a0, 1800
-; CHECK-NEXT:    add a1, a0, a1
-; CHECK-NEXT:    add a0, a2, a0
-; CHECK-NEXT:    lbu a1, 0(a1)
-; CHECK-NEXT:    lbu a0, 10(a0)
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    zext.b a0, a0
-; CHECK-NEXT:    ret
-;
-; ZBA-LABEL: test_optsize:
-; ZBA:       # %bb.0: # %entry
-; ZBA-NEXT:    addi a0, a0, 1800
-; ZBA-NEXT:    add a1, a0, a1
-; ZBA-NEXT:    add a0, a2, a0
-; ZBA-NEXT:    lbu a1, 0(a1)
-; ZBA-NEXT:    lbu a0, 10(a0)
-; ZBA-NEXT:    add a0, a0, a1
-; ZBA-NEXT:    zext.b a0, a0
-; ZBA-NEXT:    ret
+; RV32I-LABEL: test_optsize:
+; RV32I:       # %bb.0: # %entry
+; RV32I-NEXT:    addi a0, a0, 1800
+; RV32I-NEXT:    add a1, a0, a1
+; RV32I-NEXT:    add a0, a2, a0
+; RV32I-NEXT:    lbu a1, 0(a1)
+; RV32I-NEXT:    lbu a0, 10(a0)
+; RV32I-NEXT:    add a0, a0, a1
+; RV32I-NEXT:    zext.b a0, a0
+; RV32I-NEXT:    ret
+;
+; RV64I-LABEL: test_optsize:
+; RV64I:       # %bb.0: # %entry
+; RV64I-NEXT:    addi a0, a0, 1800
+; RV64I-NEXT:    add a1, a0, a1
+; RV64I-NEXT:    add a0, a2, a0
+; RV64I-NEXT:    lb a1, 0(a1)
+; RV64I-NEXT:    lb a0, 10(a0)
+; RV64I-NEXT:    add a0, a0, a1
+; RV64I-NEXT:    zext.b a0, a0
+; RV64I-NEXT:    ret
+;
+; RV32ZBA-LABEL: test_optsize:
+; RV32ZBA:       # %bb.0: # %entry
+; RV32ZBA-NEXT:    addi a0, a0, 1800
+; RV32ZBA-NEXT:    add a1, a0, a1
+; RV32ZBA-NEXT:    add a0, a2, a0
+; RV32ZBA-NEXT:    lbu a1, 0(a1)
+; RV32ZBA-NEXT:    lbu a0, 10(a0)
+; RV32ZBA-NEXT:    add a0, a0, a1
+; RV32ZBA-NEXT:    zext.b a0, a0
+; RV32ZBA-NEXT:    ret
+;
+; RV64ZBA-LABEL: test_optsize:
+; RV64ZBA:       # %bb.0: # %entry
+; RV64ZBA-NEXT:    addi a0, a0, 1800
+; RV64ZBA-NEXT:    add a1, a0, a1
+; RV64ZBA-NEXT:    add a0, a2, a0
+; RV64ZBA-NEXT:    lb a1, 0(a1)
+; RV64ZBA-NEXT:    lb a0, 10(a0)
+; RV64ZBA-NEXT:    add a0, a0, a1
+; RV64ZBA-NEXT:    zext.b a0, a0
+; RV64ZBA-NEXT:    ret
 entry:
   %e = getelementptr inbounds nuw i8, ptr %p, i64 1800
   %arrayidx = getelementptr inbounds nuw [1000 x i8], ptr %e, i64 0, iXLen %x
@@ -699,27 +741,49 @@ entry:
 }
 
 define zeroext i8 @test_minsize(ptr %p, iXLen %x, iXLen %y) minsize {
-; CHECK-LABEL: test_minsize:
-; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    addi a0, a0, 1800
-; CHECK-NEXT:    add a1, a0, a1
-; CHECK-NEXT:    add a0, a2, a0
-; CHECK-NEXT:    lbu a1, 0(a1)
-; CHECK-NEXT:    lbu a0, 10(a0)
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    zext.b a0, a0
-; CHECK-NEXT:    ret
-;
-; ZBA-LABEL: test_minsize:
-; ZBA:       # %bb.0: # %entry
-; ZBA-NEXT:    addi a0, a0, 1800
-; ZBA-NEXT:    add a1, a0, a1
-; ZBA-NEXT:    add a0, a2, a0
-; ZBA-NEXT:    lbu a1, 0(a1)
-; ZBA-NEXT:    lbu a0, 10(a0)
-; ZBA-NEXT:    add a0, a0, a1
-; ZBA-NEXT:    zext.b a0, a0
-; ZBA-NEXT:    ret
+; RV32I-LABEL: test_minsize:
+; RV32I:       # %bb.0: # %entry
+; RV32I-NEXT:    addi a0, a0, 1800
+; RV32I-NEXT:    add a1, a0, a1
+; RV32I-NEXT:    add a0, a2, a0
+; RV32I-NEXT:    lbu a1, 0(a1)
+; RV32I-NEXT:    lbu a0, 10(a0)
+; RV32I-NEXT:    add a0, a0, a1
+; RV32I-NEXT:    zext.b a0, a0
+; RV32I-NEXT:    ret
+;
+; RV64I-LABEL: test_minsize:
+; RV64I:       # %bb.0: # %entry
+; RV64I-NEXT:    addi a0, a0, 1800
+; RV64I-NEXT:    add a1, a0, a1
+; RV64I-NEXT:    add a0, a2, a0
+; RV64I-NEXT:    lb a1, 0(a1)
+; RV64I-NEXT:    lb a0, 10(a0)
+; RV64I-NEXT:    add a0, a0, a1
+; RV64I-NEXT:    zext.b a0, a0
+; RV64I-NEXT:    ret
+;
+; RV32ZBA-LABEL: test_minsize:
+; RV32ZBA:       # %bb.0: # %entry
+; RV32ZBA-NEXT:    addi a0, a0, 1800
+; RV32ZBA-NEXT:    add a1, a0, a1
+; RV32ZBA-NEXT:    add a0, a2, a0
+; RV32ZBA-NEXT:    lbu a1, 0(a1)
+; RV32ZBA-NEXT:    lbu a0, 10(a0)
+; RV32ZBA-NEXT:    add a0, a0, a1
+; RV32ZBA-NEXT:    zext.b a0, a0
+; RV32ZBA-NEXT:    ret
+;
+; RV64ZBA-LABEL: test_minsize:
+; RV64ZBA:       # %bb.0: # %entry
+; RV64ZBA-NEXT:    addi a0, a0, 1800
+; RV64ZBA-NEXT:    add a1, a0, a1
+; RV64ZBA-NEXT:    add a0, a2, a0
+; RV64ZBA-NEXT:    lb a1, 0(a1)
+; RV64ZBA-NEXT:    lb a0, 10(a0)
+; RV64ZBA-NEXT:    add a0, a0, a1
+; RV64ZBA-NEXT:    zext.b a0, a0
+; RV64ZBA-NEXT:    ret
 entry:
   %e = getelementptr inbounds nuw i8, ptr %p, i64 1800
   %arrayidx = getelementptr inbounds nuw [1000 x i8], ptr %e, i64 0, iXLen %x
@@ -730,3 +794,6 @@ entry:
   %add4 = add i8 %1, %0
   ret i8 %add4
 }
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; CHECK: {{.*}}
+; ZBA: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/half-arith.ll b/llvm/test/CodeGen/RISCV/half-arith.ll
index 84163b52bb98d..ccff96b11e4ff 100644
--- a/llvm/test/CodeGen/RISCV/half-arith.ll
+++ b/llvm/test/CodeGen/RISCV/half-arith.ll
@@ -2903,7 +2903,7 @@ define half @fsgnjx_f16(half %x, half %y) nounwind {
 ; RV64IZFHMIN-LABEL: fsgnjx_f16:
 ; RV64IZFHMIN:       # %bb.0:
 ; RV64IZFHMIN-NEXT:    lui a0, %hi(.LCPI23_0)
-; RV64IZFHMIN-NEXT:    lhu a0, %lo(.LCPI23_0)(a0)
+; RV64IZFHMIN-NEXT:    lh a0, %lo(.LCPI23_0)(a0)
 ; RV64IZFHMIN-NEXT:    fmv.x.h a1, fa0
 ; RV64IZFHMIN-NEXT:    lui a2, 1048568
 ; RV64IZFHMIN-NEXT:    and a1, a1, a2
diff --git a/llvm/test/CodeGen/RISCV/half-convert-strict.ll b/llvm/test/CodeGen/RISCV/half-convert-strict.ll
index 0a04d44893e75..675e230816f37 100644
--- a/llvm/test/CodeGen/RISCV/half-convert-strict.ll
+++ b/llvm/test/CodeGen/RISCV/half-convert-strict.ll
@@ -1461,29 +1461,17 @@ define half @fcvt_h_wu(i32 %a) nounwind strictfp {
 declare half @llvm.experimental.constrained.uitofp.f16.i32(i32, metadata, metadata)
 
 define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
-; RV32IZFH-LABEL: fcvt_h_wu_load:
-; RV32IZFH:       # %bb.0:
-; RV32IZFH-NEXT:    lw a0, 0(a0)
-; RV32IZFH-NEXT:    fcvt.h.wu fa0, a0
-; RV32IZFH-NEXT:    ret
-;
-; RV64IZFH-LABEL: fcvt_h_wu_load:
-; RV64IZFH:       # %bb.0:
-; RV64IZFH-NEXT:    lwu a0, 0(a0)
-; RV64IZFH-NEXT:    fcvt.h.wu fa0, a0
-; RV64IZFH-NEXT:    ret
-;
-; RV32IZHINX-LABEL: fcvt_h_wu_load:
-; RV32IZHINX:       # %bb.0:
-; RV32IZHINX-NEXT:    lw a0, 0(a0)
-; RV32IZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV32IZHINX-NEXT:    ret
+; CHECKIZFH-LABEL: fcvt_h_wu_load:
+; CHECKIZFH:       # %bb.0:
+; CHECKIZFH-NEXT:    lw a0, 0(a0)
+; CHECKIZFH-NEXT:    fcvt.h.wu fa0, a0
+; CHECKIZFH-NEXT:    ret
 ;
-; RV64IZHINX-LABEL: fcvt_h_wu_load:
-; RV64IZHINX:       # %bb.0:
-; RV64IZHINX-NEXT:    lwu a0, 0(a0)
-; RV64IZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV64IZHINX-NEXT:    ret
+; CHECKIZHINX-LABEL: fcvt_h_wu_load:
+; CHECKIZHINX:       # %bb.0:
+; CHECKIZHINX-NEXT:    lw a0, 0(a0)
+; CHECKIZHINX-NEXT:    fcvt.h.wu a0, a0
+; CHECKIZHINX-NEXT:    ret
 ;
 ; RV32IDZFH-LABEL: fcvt_h_wu_load:
 ; RV32IDZFH:       # %bb.0:
@@ -1493,7 +1481,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
 ;
 ; RV64IDZFH-LABEL: fcvt_h_wu_load:
 ; RV64IDZFH:       # %bb.0:
-; RV64IDZFH-NEXT:    lwu a0, 0(a0)
+; RV64IDZFH-NEXT:    lw a0, 0(a0)
 ; RV64IDZFH-NEXT:    fcvt.h.wu fa0, a0
 ; RV64IDZFH-NEXT:    ret
 ;
@@ -1505,7 +1493,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
 ;
 ; RV64IZDINXZHINX-LABEL: fcvt_h_wu_load:
 ; RV64IZDINXZHINX:       # %bb.0:
-; RV64IZDINXZHINX-NEXT:    lwu a0, 0(a0)
+; RV64IZDINXZHINX-NEXT:    lw a0, 0(a0)
 ; RV64IZDINXZHINX-NEXT:    fcvt.h.wu a0, a0
 ; RV64IZDINXZHINX-NEXT:    ret
 ;
@@ -1518,7 +1506,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
 ;
 ; CHECK64-IZFHMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZFHMIN:       # %bb.0:
-; CHECK64-IZFHMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZFHMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZFHMIN-NEXT:    fcvt.s.wu fa5, a0
 ; CHECK64-IZFHMIN-NEXT:    fcvt.h.s fa0, fa5
 ; CHECK64-IZFHMIN-NEXT:    ret
@@ -1532,7 +1520,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
 ;
 ; CHECK64-IZHINXMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZHINXMIN:       # %bb.0:
-; CHECK64-IZHINXMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZHINXMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZHINXMIN-NEXT:    fcvt.s.wu a0, a0
 ; CHECK64-IZHINXMIN-NEXT:    fcvt.h.s a0, a0
 ; CHECK64-IZHINXMIN-NEXT:    ret
@@ -1546,7 +1534,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind strictfp {
 ;
 ; CHECK64-IZDINXZHINXMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZDINXZHINXMIN:       # %bb.0:
-; CHECK64-IZDINXZHINXMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZDINXZHINXMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZDINXZHINXMIN-NEXT:    fcvt.s.wu a0, a0
 ; CHECK64-IZDINXZHINXMIN-NEXT:    fcvt.h.s a0, a0
 ; CHECK64-IZDINXZHINXMIN-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/half-convert.ll b/llvm/test/CodeGen/RISCV/half-convert.ll
index c53237ed6aef7..facb544fb52b6 100644
--- a/llvm/test/CodeGen/RISCV/half-convert.ll
+++ b/llvm/test/CodeGen/RISCV/half-convert.ll
@@ -4388,17 +4388,11 @@ define half @fcvt_h_wu(i32 %a) nounwind {
 }
 
 define half @fcvt_h_wu_load(ptr %p) nounwind {
-; RV32IZFH-LABEL: fcvt_h_wu_load:
-; RV32IZFH:       # %bb.0:
-; RV32IZFH-NEXT:    lw a0, 0(a0)
-; RV32IZFH-NEXT:    fcvt.h.wu fa0, a0
-; RV32IZFH-NEXT:    ret
-;
-; RV64IZFH-LABEL: fcvt_h_wu_load:
-; RV64IZFH:       # %bb.0:
-; RV64IZFH-NEXT:    lwu a0, 0(a0)
-; RV64IZFH-NEXT:    fcvt.h.wu fa0, a0
-; RV64IZFH-NEXT:    ret
+; CHECKIZFH-LABEL: fcvt_h_wu_load:
+; CHECKIZFH:       # %bb.0:
+; CHECKIZFH-NEXT:    lw a0, 0(a0)
+; CHECKIZFH-NEXT:    fcvt.h.wu fa0, a0
+; CHECKIZFH-NEXT:    ret
 ;
 ; RV32IDZFH-LABEL: fcvt_h_wu_load:
 ; RV32IDZFH:       # %bb.0:
@@ -4408,33 +4402,21 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ;
 ; RV64IDZFH-LABEL: fcvt_h_wu_load:
 ; RV64IDZFH:       # %bb.0:
-; RV64IDZFH-NEXT:    lwu a0, 0(a0)
+; RV64IDZFH-NEXT:    lw a0, 0(a0)
 ; RV64IDZFH-NEXT:    fcvt.h.wu fa0, a0
 ; RV64IDZFH-NEXT:    ret
 ;
-; RV32IZHINX-LABEL: fcvt_h_wu_load:
-; RV32IZHINX:       # %bb.0:
-; RV32IZHINX-NEXT:    lw a0, 0(a0)
-; RV32IZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV32IZHINX-NEXT:    ret
-;
-; RV64IZHINX-LABEL: fcvt_h_wu_load:
-; RV64IZHINX:       # %bb.0:
-; RV64IZHINX-NEXT:    lwu a0, 0(a0)
-; RV64IZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV64IZHINX-NEXT:    ret
-;
-; RV32IZDINXZHINX-LABEL: fcvt_h_wu_load:
-; RV32IZDINXZHINX:       # %bb.0:
-; RV32IZDINXZHINX-NEXT:    lw a0, 0(a0)
-; RV32IZDINXZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV32IZDINXZHINX-NEXT:    ret
+; CHECKIZHINX-LABEL: fcvt_h_wu_load:
+; CHECKIZHINX:       # %bb.0:
+; CHECKIZHINX-NEXT:    lw a0, 0(a0)
+; CHECKIZHINX-NEXT:    fcvt.h.wu a0, a0
+; CHECKIZHINX-NEXT:    ret
 ;
-; RV64IZDINXZHINX-LABEL: fcvt_h_wu_load:
-; RV64IZDINXZHINX:       # %bb.0:
-; RV64IZDINXZHINX-NEXT:    lwu a0, 0(a0)
-; RV64IZDINXZHINX-NEXT:    fcvt.h.wu a0, a0
-; RV64IZDINXZHINX-NEXT:    ret
+; CHECKIZDINXZHINX-LABEL: fcvt_h_wu_load:
+; CHECKIZDINXZHINX:       # %bb.0:
+; CHECKIZDINXZHINX-NEXT:    lw a0, 0(a0)
+; CHECKIZDINXZHINX-NEXT:    fcvt.h.wu a0, a0
+; CHECKIZDINXZHINX-NEXT:    ret
 ;
 ; RV32I-LABEL: fcvt_h_wu_load:
 ; RV32I:       # %bb.0:
@@ -4476,7 +4458,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ; RV64ID-LP64:       # %bb.0:
 ; RV64ID-LP64-NEXT:    addi sp, sp, -16
 ; RV64ID-LP64-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
-; RV64ID-LP64-NEXT:    lwu a0, 0(a0)
+; RV64ID-LP64-NEXT:    lw a0, 0(a0)
 ; RV64ID-LP64-NEXT:    fcvt.s.wu fa5, a0
 ; RV64ID-LP64-NEXT:    fmv.x.w a0, fa5
 ; RV64ID-LP64-NEXT:    call __truncsfhf2
@@ -4505,7 +4487,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ; RV64ID:       # %bb.0:
 ; RV64ID-NEXT:    addi sp, sp, -16
 ; RV64ID-NEXT:    sd ra, 8(sp) # 8-byte Folded Spill
-; RV64ID-NEXT:    lwu a0, 0(a0)
+; RV64ID-NEXT:    lw a0, 0(a0)
 ; RV64ID-NEXT:    fcvt.s.wu fa0, a0
 ; RV64ID-NEXT:    call __truncsfhf2
 ; RV64ID-NEXT:    fmv.x.w a0, fa0
@@ -4525,7 +4507,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ;
 ; CHECK64-IZFHMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZFHMIN:       # %bb.0:
-; CHECK64-IZFHMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZFHMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZFHMIN-NEXT:    fcvt.s.wu fa5, a0
 ; CHECK64-IZFHMIN-NEXT:    fcvt.h.s fa0, fa5
 ; CHECK64-IZFHMIN-NEXT:    ret
@@ -4539,7 +4521,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ;
 ; CHECK64-IZHINXMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZHINXMIN:       # %bb.0:
-; CHECK64-IZHINXMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZHINXMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZHINXMIN-NEXT:    fcvt.s.wu a0, a0
 ; CHECK64-IZHINXMIN-NEXT:    fcvt.h.s a0, a0
 ; CHECK64-IZHINXMIN-NEXT:    ret
@@ -4553,7 +4535,7 @@ define half @fcvt_h_wu_load(ptr %p) nounwind {
 ;
 ; CHECK64-IZDINXZHINXMIN-LABEL: fcvt_h_wu_load:
 ; CHECK64-IZDINXZHINXMIN:       # %bb.0:
-; CHECK64-IZDINXZHINXMIN-NEXT:    lwu a0, 0(a0)
+; CHECK64-IZDINXZHINXMIN-NEXT:    lw a0, 0(a0)
 ; CHECK64-IZDINXZHINXMIN-NEXT:    fcvt.s.wu a0, a0
 ; CHECK64-IZDINXZHINXMIN-NEXT:    fcvt.h.s a0, a0
 ; CHECK64-IZDINXZHINXMIN-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll b/llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll
index 77efffb6358af..9ec1d071e7a66 100644
--- a/llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll
+++ b/llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll
@@ -367,13 +367,21 @@ define void @store_sh3add() {
 }
 
 define dso_local void @rmw_addi_addi() nounwind {
-; CHECK-LABEL: rmw_addi_addi:
-; CHECK:       # %bb.0: # %entry
-; CHECK-NEXT:    lui a0, %hi(bar+3211)
-; CHECK-NEXT:    lbu a1, %lo(bar+3211)(a0)
-; CHECK-NEXT:    addi a1, a1, 10
-; CHECK-NEXT:    sb a1, %lo(bar+3211)(a0)
-; CHECK-NEXT:    ret
+; RV32-LABEL: rmw_addi_addi:
+; RV32:       # %bb.0: # %entry
+; RV32-NEXT:    lui a0, %hi(bar+3211)
+; RV32-NEXT:    lbu a1, %lo(bar+3211)(a0)
+; RV32-NEXT:    addi a1, a1, 10
+; RV32-NEXT:    sb a1, %lo(bar+3211)(a0)
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: rmw_addi_addi:
+; RV64:       # %bb.0: # %entry
+; RV64-NEXT:    lui a0, %hi(bar+3211)
+; RV64-NEXT:    lb a1, %lo(bar+3211)(a0)
+; RV64-NEXT:    addi a1, a1, 10
+; RV64-NEXT:    sb a1, %lo(bar+3211)(a0)
+; RV64-NEXT:    ret
 entry:
   %0 = load i8, ptr getelementptr inbounds ([0 x i8], ptr @bar, i32 0, i64 3211)
   %1 = add i8 %0, 10
diff --git a/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll b/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
index a691fc885704d..8a21d444e27a0 100644
--- a/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
+++ b/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
@@ -32,8 +32,8 @@ define void @use_frame_base_reg() {
 ; RV64I-NEXT:    lui a0, 24
 ; RV64I-NEXT:    addi a0, a0, 1704
 ; RV64I-NEXT:    add a0, sp, a0
-; RV64I-NEXT:    lbu zero, 4(a0)
-; RV64I-NEXT:    lbu zero, 0(a0)
+; RV64I-NEXT:    lb zero, 4(a0)
+; RV64I-NEXT:    lb zero, 0(a0)
 ; RV64I-NEXT:    lui a0, 24
 ; RV64I-NEXT:    addi a0, a0, 1712
 ; RV64I-NEXT:    add sp, sp, a0
@@ -70,7 +70,7 @@ define void @load_with_offset() {
 ; RV64I-NEXT:    addi sp, sp, -464
 ; RV64I-NEXT:    .cfi_def_cfa_offset 2512
 ; RV64I-NEXT:    addi a0, sp, 2012
-; RV64I-NEXT:    lbu a1, 0(a0)
+; RV64I-NEXT:    lb a1, 0(a0)
 ; RV64I-NEXT:    sb a1, 0(a0)
 ; RV64I-NEXT:    addi sp, sp, 2032
 ; RV64I-NEXT:    addi sp, sp, 480
@@ -104,7 +104,7 @@ define void @load_with_offset2() {
 ; RV64I-NEXT:    addi sp, sp, -2048
 ; RV64I-NEXT:    addi sp, sp, -464
 ; RV64I-NEXT:    .cfi_def_cfa_offset 2512
-; RV64I-NEXT:    lbu a0, 1412(sp)
+; RV64I-NEXT:    lb a0, 1412(sp)
 ; RV64I-NEXT:    sb a0, 1412(sp)
 ; RV64I-NEXT:    addi sp, sp, 2032
 ; RV64I-NEXT:    addi sp, sp, 480
@@ -154,7 +154,7 @@ define void @frame_pointer() "frame-pointer"="all" {
 ; RV64I-NEXT:    .cfi_def_cfa s0, 0
 ; RV64I-NEXT:    addi sp, sp, -496
 ; RV64I-NEXT:    addi a0, s0, -1972
-; RV64I-NEXT:    lbu a1, 0(a0)
+; RV64I-NEXT:    lb a1, 0(a0)
 ; RV64I-NEXT:    sb a1, 0(a0)
 ; RV64I-NEXT:    addi sp, sp, 496
 ; RV64I-NEXT:    .cfi_def_cfa sp, 2032
diff --git a/llvm/test/CodeGen/RISCV/mem64.ll b/llvm/test/CodeGen/RISCV/mem64.ll
index 248964146325a..b204899f1ec4b 100644
--- a/llvm/test/CodeGen/RISCV/mem64.ll
+++ b/llvm/test/CodeGen/RISCV/mem64.ll
@@ -8,7 +8,7 @@ define dso_local i64 @lb(ptr %a) nounwind {
 ; RV64I-LABEL: lb:
 ; RV64I:       # %bb.0:
 ; RV64I-NEXT:    lb a1, 1(a0)
-; RV64I-NEXT:    lbu zero, 0(a0)
+; RV64I-NEXT:    lb zero, 0(a0)
 ; RV64I-NEXT:    mv a0, a1
 ; RV64I-NEXT:    ret
   %1 = getelementptr i8, ptr %a, i32 1
@@ -168,7 +168,7 @@ define dso_local i64 @load_sext_zext_anyext_i1(ptr %a) nounwind {
 ; RV64I:       # %bb.0:
 ; RV64I-NEXT:    lbu a1, 1(a0)
 ; RV64I-NEXT:    lbu a2, 2(a0)
-; RV64I-NEXT:    lbu zero, 0(a0)
+; RV64I-NEXT:    lb zero, 0(a0)
 ; RV64I-NEXT:    sub a0, a2, a1
 ; RV64I-NEXT:    ret
   ; sextload i1
@@ -190,7 +190,7 @@ define dso_local i16 @load_sext_zext_anyext_i1_i16(ptr %a) nounwind {
 ; RV64I:       # %bb.0:
 ; RV64I-NEXT:    lbu a1, 1(a0)
 ; RV64I-NEXT:    lbu a2, 2(a0)
-; RV64I-NEXT:    lbu zero, 0(a0)
+; RV64I-NEXT:    lb zero, 0(a0)
 ; RV64I-NEXT:    sub a0, a2, a1
 ; RV64I-NEXT:    ret
   ; sextload i1
diff --git a/llvm/test/CodeGen/RISCV/memcmp-optsize.ll b/llvm/test/CodeGen/RISCV/memcmp-optsize.ll
index 0d57e4201512e..53ff2a978708c 100644
--- a/llvm/test/CodeGen/RISCV/memcmp-optsize.ll
+++ b/llvm/test/CodeGen/RISCV/memcmp-optsize.ll
@@ -3780,9 +3780,9 @@ define i32 @memcmp_size_5(ptr %s1, ptr %s2) nounwind optsize {
 ;
 ; CHECK-UNALIGNED-RV64-ZBKB-LABEL: memcmp_size_5:
 ; CHECK-UNALIGNED-RV64-ZBKB:       # %bb.0: # %entry
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a2, 0(a0)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a2, 0(a0)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lbu a0, 4(a0)
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a3, 0(a1)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a3, 0(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lbu a1, 4(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a0, a2, a0
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a1, a3, a1
@@ -3985,9 +3985,9 @@ define i32 @memcmp_size_6(ptr %s1, ptr %s2) nounwind optsize {
 ;
 ; CHECK-UNALIGNED-RV64-ZBKB-LABEL: memcmp_size_6:
 ; CHECK-UNALIGNED-RV64-ZBKB:       # %bb.0: # %entry
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a2, 0(a0)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a2, 0(a0)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lhu a0, 4(a0)
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a3, 0(a1)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a3, 0(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lhu a1, 4(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a0, a2, a0
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a1, a3, a1
@@ -5444,12 +5444,12 @@ define i1 @memcmp_eq_zero(ptr %s1, ptr %s2) nounwind optsize {
 ;
 ; CHECK-ALIGNED-RV64-ZBKB-LABEL: memcmp_eq_zero:
 ; CHECK-ALIGNED-RV64-ZBKB:       # %bb.0: # %entry
-; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a2, 0(a1)
-; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a3, 1(a1)
+; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a2, 0(a1)
+; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a3, 1(a1)
 ; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a4, 2(a1)
 ; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a1, 3(a1)
-; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a5, 0(a0)
-; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a6, 1(a0)
+; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a5, 0(a0)
+; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a6, 1(a0)
 ; CHECK-ALIGNED-RV64-ZBKB-NEXT:    packh a2, a2, a3
 ; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lbu a3, 2(a0)
 ; CHECK-ALIGNED-RV64-ZBKB-NEXT:    lb a0, 3(a0)
diff --git a/llvm/test/CodeGen/RISCV/memcmp.ll b/llvm/test/CodeGen/RISCV/memcmp.ll
index 0caab1f5ce2f0..a5bdb13d37fb8 100644
--- a/llvm/test/CodeGen/RISCV/memcmp.ll
+++ b/llvm/test/CodeGen/RISCV/memcmp.ll
@@ -4410,9 +4410,9 @@ define i32 @memcmp_size_5(ptr %s1, ptr %s2) nounwind {
 ;
 ; CHECK-UNALIGNED-RV64-ZBKB-LABEL: memcmp_size_5:
 ; CHECK-UNALIGNED-RV64-ZBKB:       # %bb.0: # %entry
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a2, 0(a0)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a2, 0(a0)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lbu a0, 4(a0)
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a3, 0(a1)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a3, 0(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lbu a1, 4(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a0, a2, a0
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a1, a3, a1
@@ -4615,9 +4615,9 @@ define i32 @memcmp_size_6(ptr %s1, ptr %s2) nounwind {
 ;
 ; CHECK-UNALIGNED-RV64-ZBKB-LABEL: memcmp_size_6:
 ; CHECK-UNALIGNED-RV64-ZBKB:       # %bb.0: # %entry
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a2, 0(a0)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a2, 0(a0)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lhu a0, 4(a0)
-; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lwu a3, 0(a1)
+; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lw a3, 0(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    lhu a1, 4(a1)
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a0, a2, a0
 ; CHECK-UNALIGNED-RV64-ZBKB-NEXT:    pack a1, a3, a1
diff --git a/llvm/test/CodeGen/RISCV/memcpy-inline.ll b/llvm/test/CodeGen/RISCV/memcpy-inline.ll
index 833e07351eec7..d1517f2b597da 100644
--- a/llvm/test/CodeGen/RISCV/memcpy-inline.ll
+++ b/llvm/test/CodeGen/RISCV/memcpy-inline.ll
@@ -33,7 +33,7 @@ define void @unaligned_memcpy1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: unaligned_memcpy1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -52,9 +52,9 @@ define void @unaligned_memcpy2(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy2:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -87,11 +87,11 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy3:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -105,7 +105,7 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-FAST-LABEL: unaligned_memcpy3:
 ; RV64-FAST:       # %bb.0: # %entry
-; RV64-FAST-NEXT:    lbu a2, 2(a1)
+; RV64-FAST-NEXT:    lb a2, 2(a1)
 ; RV64-FAST-NEXT:    sb a2, 2(a0)
 ; RV64-FAST-NEXT:    lh a1, 0(a1)
 ; RV64-FAST-NEXT:    sh a1, 0(a0)
@@ -130,13 +130,13 @@ define void @unaligned_memcpy4(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy4:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -177,19 +177,19 @@ define void @unaligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -236,21 +236,21 @@ define void @unaligned_memcpy8(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 7(a1)
+; RV64-NEXT:    lb a2, 7(a1)
 ; RV64-NEXT:    sb a2, 7(a0)
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -309,35 +309,35 @@ define void @unaligned_memcpy15(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy15:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
-; RV64-NEXT:    lbu a2, 13(a1)
+; RV64-NEXT:    lb a2, 13(a1)
 ; RV64-NEXT:    sb a2, 13(a0)
-; RV64-NEXT:    lbu a2, 12(a1)
+; RV64-NEXT:    lb a2, 12(a1)
 ; RV64-NEXT:    sb a2, 12(a0)
-; RV64-NEXT:    lbu a2, 11(a1)
+; RV64-NEXT:    lb a2, 11(a1)
 ; RV64-NEXT:    sb a2, 11(a0)
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
-; RV64-NEXT:    lbu a2, 9(a1)
+; RV64-NEXT:    lb a2, 9(a1)
 ; RV64-NEXT:    sb a2, 9(a0)
-; RV64-NEXT:    lbu a2, 8(a1)
+; RV64-NEXT:    lb a2, 8(a1)
 ; RV64-NEXT:    sb a2, 8(a0)
-; RV64-NEXT:    lbu a2, 7(a1)
+; RV64-NEXT:    lb a2, 7(a1)
 ; RV64-NEXT:    sb a2, 7(a0)
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -404,37 +404,37 @@ define void @unaligned_memcpy16(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy16:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 15(a1)
+; RV64-NEXT:    lb a2, 15(a1)
 ; RV64-NEXT:    sb a2, 15(a0)
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
-; RV64-NEXT:    lbu a2, 13(a1)
+; RV64-NEXT:    lb a2, 13(a1)
 ; RV64-NEXT:    sb a2, 13(a0)
-; RV64-NEXT:    lbu a2, 12(a1)
+; RV64-NEXT:    lb a2, 12(a1)
 ; RV64-NEXT:    sb a2, 12(a0)
-; RV64-NEXT:    lbu a2, 11(a1)
+; RV64-NEXT:    lb a2, 11(a1)
 ; RV64-NEXT:    sb a2, 11(a0)
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
-; RV64-NEXT:    lbu a2, 9(a1)
+; RV64-NEXT:    lb a2, 9(a1)
 ; RV64-NEXT:    sb a2, 9(a0)
-; RV64-NEXT:    lbu a2, 8(a1)
+; RV64-NEXT:    lb a2, 8(a1)
 ; RV64-NEXT:    sb a2, 8(a0)
-; RV64-NEXT:    lbu a2, 7(a1)
+; RV64-NEXT:    lb a2, 7(a1)
 ; RV64-NEXT:    sb a2, 7(a0)
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -531,67 +531,67 @@ define void @unaligned_memcpy31(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy31:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 30(a1)
+; RV64-NEXT:    lb a2, 30(a1)
 ; RV64-NEXT:    sb a2, 30(a0)
-; RV64-NEXT:    lbu a2, 29(a1)
+; RV64-NEXT:    lb a2, 29(a1)
 ; RV64-NEXT:    sb a2, 29(a0)
-; RV64-NEXT:    lbu a2, 28(a1)
+; RV64-NEXT:    lb a2, 28(a1)
 ; RV64-NEXT:    sb a2, 28(a0)
-; RV64-NEXT:    lbu a2, 27(a1)
+; RV64-NEXT:    lb a2, 27(a1)
 ; RV64-NEXT:    sb a2, 27(a0)
-; RV64-NEXT:    lbu a2, 26(a1)
+; RV64-NEXT:    lb a2, 26(a1)
 ; RV64-NEXT:    sb a2, 26(a0)
-; RV64-NEXT:    lbu a2, 25(a1)
+; RV64-NEXT:    lb a2, 25(a1)
 ; RV64-NEXT:    sb a2, 25(a0)
-; RV64-NEXT:    lbu a2, 24(a1)
+; RV64-NEXT:    lb a2, 24(a1)
 ; RV64-NEXT:    sb a2, 24(a0)
-; RV64-NEXT:    lbu a2, 23(a1)
+; RV64-NEXT:    lb a2, 23(a1)
 ; RV64-NEXT:    sb a2, 23(a0)
-; RV64-NEXT:    lbu a2, 22(a1)
+; RV64-NEXT:    lb a2, 22(a1)
 ; RV64-NEXT:    sb a2, 22(a0)
-; RV64-NEXT:    lbu a2, 21(a1)
+; RV64-NEXT:    lb a2, 21(a1)
 ; RV64-NEXT:    sb a2, 21(a0)
-; RV64-NEXT:    lbu a2, 20(a1)
+; RV64-NEXT:    lb a2, 20(a1)
 ; RV64-NEXT:    sb a2, 20(a0)
-; RV64-NEXT:    lbu a2, 19(a1)
+; RV64-NEXT:    lb a2, 19(a1)
 ; RV64-NEXT:    sb a2, 19(a0)
-; RV64-NEXT:    lbu a2, 18(a1)
+; RV64-NEXT:    lb a2, 18(a1)
 ; RV64-NEXT:    sb a2, 18(a0)
-; RV64-NEXT:    lbu a2, 17(a1)
+; RV64-NEXT:    lb a2, 17(a1)
 ; RV64-NEXT:    sb a2, 17(a0)
-; RV64-NEXT:    lbu a2, 16(a1)
+; RV64-NEXT:    lb a2, 16(a1)
 ; RV64-NEXT:    sb a2, 16(a0)
-; RV64-NEXT:    lbu a2, 15(a1)
+; RV64-NEXT:    lb a2, 15(a1)
 ; RV64-NEXT:    sb a2, 15(a0)
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
-; RV64-NEXT:    lbu a2, 13(a1)
+; RV64-NEXT:    lb a2, 13(a1)
 ; RV64-NEXT:    sb a2, 13(a0)
-; RV64-NEXT:    lbu a2, 12(a1)
+; RV64-NEXT:    lb a2, 12(a1)
 ; RV64-NEXT:    sb a2, 12(a0)
-; RV64-NEXT:    lbu a2, 11(a1)
+; RV64-NEXT:    lb a2, 11(a1)
 ; RV64-NEXT:    sb a2, 11(a0)
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
-; RV64-NEXT:    lbu a2, 9(a1)
+; RV64-NEXT:    lb a2, 9(a1)
 ; RV64-NEXT:    sb a2, 9(a0)
-; RV64-NEXT:    lbu a2, 8(a1)
+; RV64-NEXT:    lb a2, 8(a1)
 ; RV64-NEXT:    sb a2, 8(a0)
-; RV64-NEXT:    lbu a2, 7(a1)
+; RV64-NEXT:    lb a2, 7(a1)
 ; RV64-NEXT:    sb a2, 7(a0)
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -656,7 +656,7 @@ define void @aligned_memcpy1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memcpy1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -692,7 +692,7 @@ define void @aligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memcpy3:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a2, 2(a1)
+; RV64-BOTH-NEXT:    lb a2, 2(a1)
 ; RV64-BOTH-NEXT:    sb a2, 2(a0)
 ; RV64-BOTH-NEXT:    lh a1, 0(a1)
 ; RV64-BOTH-NEXT:    sh a1, 0(a0)
@@ -732,7 +732,7 @@ define void @aligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
 ; RV64-NEXT:    lh a2, 4(a1)
 ; RV64-NEXT:    sh a2, 4(a0)
@@ -796,7 +796,7 @@ define void @aligned_memcpy15(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy15:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
 ; RV64-NEXT:    lh a2, 12(a1)
 ; RV64-NEXT:    sh a2, 12(a0)
@@ -880,7 +880,7 @@ define void @aligned_memcpy31(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy31:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 30(a1)
+; RV64-NEXT:    lb a2, 30(a1)
 ; RV64-NEXT:    sb a2, 30(a0)
 ; RV64-NEXT:    lh a2, 28(a1)
 ; RV64-NEXT:    sh a2, 28(a0)
@@ -987,7 +987,7 @@ define i32 @memcpy11_align8(ptr nocapture %dest, ptr %src) {
 ;
 ; RV64-LABEL: memcpy11_align8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
 ; RV64-NEXT:    lh a2, 8(a1)
 ; RV64-NEXT:    sh a2, 8(a0)
diff --git a/llvm/test/CodeGen/RISCV/memcpy.ll b/llvm/test/CodeGen/RISCV/memcpy.ll
index 447fc26b0106e..eba477b55c649 100644
--- a/llvm/test/CodeGen/RISCV/memcpy.ll
+++ b/llvm/test/CodeGen/RISCV/memcpy.ll
@@ -33,7 +33,7 @@ define void @unaligned_memcpy1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: unaligned_memcpy1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -52,9 +52,9 @@ define void @unaligned_memcpy2(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy2:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -87,11 +87,11 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy3:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -105,7 +105,7 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-FAST-LABEL: unaligned_memcpy3:
 ; RV64-FAST:       # %bb.0: # %entry
-; RV64-FAST-NEXT:    lbu a2, 2(a1)
+; RV64-FAST-NEXT:    lb a2, 2(a1)
 ; RV64-FAST-NEXT:    sb a2, 2(a0)
 ; RV64-FAST-NEXT:    lh a1, 0(a1)
 ; RV64-FAST-NEXT:    sh a1, 0(a0)
@@ -130,13 +130,13 @@ define void @unaligned_memcpy4(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy4:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -177,19 +177,19 @@ define void @unaligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -236,21 +236,21 @@ define void @unaligned_memcpy8(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 7(a1)
+; RV64-NEXT:    lb a2, 7(a1)
 ; RV64-NEXT:    sb a2, 7(a0)
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
-; RV64-NEXT:    lbu a2, 5(a1)
+; RV64-NEXT:    lb a2, 5(a1)
 ; RV64-NEXT:    sb a2, 5(a0)
-; RV64-NEXT:    lbu a2, 4(a1)
+; RV64-NEXT:    lb a2, 4(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
-; RV64-NEXT:    lbu a2, 3(a1)
+; RV64-NEXT:    lb a2, 3(a1)
 ; RV64-NEXT:    sb a2, 3(a0)
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
-; RV64-NEXT:    lbu a2, 1(a1)
+; RV64-NEXT:    lb a2, 1(a1)
 ; RV64-NEXT:    sb a2, 1(a0)
-; RV64-NEXT:    lbu a1, 0(a1)
+; RV64-NEXT:    lb a1, 0(a1)
 ; RV64-NEXT:    sb a1, 0(a0)
 ; RV64-NEXT:    ret
 ;
@@ -414,7 +414,7 @@ define void @aligned_memcpy1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memcpy1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -450,7 +450,7 @@ define void @aligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memcpy3:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a2, 2(a1)
+; RV64-BOTH-NEXT:    lb a2, 2(a1)
 ; RV64-BOTH-NEXT:    sb a2, 2(a0)
 ; RV64-BOTH-NEXT:    lh a1, 0(a1)
 ; RV64-BOTH-NEXT:    sh a1, 0(a0)
@@ -490,7 +490,7 @@ define void @aligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
 ; RV64-NEXT:    lh a2, 4(a1)
 ; RV64-NEXT:    sh a2, 4(a0)
@@ -554,7 +554,7 @@ define void @aligned_memcpy15(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy15:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
 ; RV64-NEXT:    lh a2, 12(a1)
 ; RV64-NEXT:    sh a2, 12(a0)
@@ -621,7 +621,7 @@ define void @aligned_memcpy31(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy31:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 30(a1)
+; RV64-NEXT:    lb a2, 30(a1)
 ; RV64-NEXT:    sb a2, 30(a0)
 ; RV64-NEXT:    lh a2, 28(a1)
 ; RV64-NEXT:    sh a2, 28(a0)
@@ -728,7 +728,7 @@ define i32 @memcpy11_align8(ptr nocapture %dest, ptr %src) {
 ;
 ; RV64-LABEL: memcpy11_align8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
 ; RV64-NEXT:    lh a2, 8(a1)
 ; RV64-NEXT:    sh a2, 8(a0)
diff --git a/llvm/test/CodeGen/RISCV/memmove.ll b/llvm/test/CodeGen/RISCV/memmove.ll
index 62915bd4ad99d..08a598eb224fb 100644
--- a/llvm/test/CodeGen/RISCV/memmove.ll
+++ b/llvm/test/CodeGen/RISCV/memmove.ll
@@ -33,7 +33,7 @@ define void @unaligned_memmove1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: unaligned_memmove1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -52,8 +52,8 @@ define void @unaligned_memmove2(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memmove2:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 0(a1)
-; RV64-NEXT:    lbu a1, 1(a1)
+; RV64-NEXT:    lb a2, 0(a1)
+; RV64-NEXT:    lb a1, 1(a1)
 ; RV64-NEXT:    sb a2, 0(a0)
 ; RV64-NEXT:    sb a1, 1(a0)
 ; RV64-NEXT:    ret
@@ -87,9 +87,9 @@ define void @unaligned_memmove3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memmove3:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 0(a1)
-; RV64-NEXT:    lbu a3, 1(a1)
-; RV64-NEXT:    lbu a1, 2(a1)
+; RV64-NEXT:    lb a2, 0(a1)
+; RV64-NEXT:    lb a3, 1(a1)
+; RV64-NEXT:    lb a1, 2(a1)
 ; RV64-NEXT:    sb a2, 0(a0)
 ; RV64-NEXT:    sb a3, 1(a0)
 ; RV64-NEXT:    sb a1, 2(a0)
@@ -106,7 +106,7 @@ define void @unaligned_memmove3(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-FAST-LABEL: unaligned_memmove3:
 ; RV64-FAST:       # %bb.0: # %entry
 ; RV64-FAST-NEXT:    lh a2, 0(a1)
-; RV64-FAST-NEXT:    lbu a1, 2(a1)
+; RV64-FAST-NEXT:    lb a1, 2(a1)
 ; RV64-FAST-NEXT:    sh a2, 0(a0)
 ; RV64-FAST-NEXT:    sb a1, 2(a0)
 ; RV64-FAST-NEXT:    ret
@@ -130,10 +130,10 @@ define void @unaligned_memmove4(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memmove4:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 0(a1)
-; RV64-NEXT:    lbu a3, 1(a1)
-; RV64-NEXT:    lbu a4, 2(a1)
-; RV64-NEXT:    lbu a1, 3(a1)
+; RV64-NEXT:    lb a2, 0(a1)
+; RV64-NEXT:    lb a3, 1(a1)
+; RV64-NEXT:    lb a4, 2(a1)
+; RV64-NEXT:    lb a1, 3(a1)
 ; RV64-NEXT:    sb a2, 0(a0)
 ; RV64-NEXT:    sb a3, 1(a0)
 ; RV64-NEXT:    sb a4, 2(a0)
@@ -177,13 +177,13 @@ define void @unaligned_memmove7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memmove7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 4(a1)
-; RV64-NEXT:    lbu a3, 5(a1)
-; RV64-NEXT:    lbu a4, 6(a1)
-; RV64-NEXT:    lbu a5, 0(a1)
-; RV64-NEXT:    lbu a6, 1(a1)
-; RV64-NEXT:    lbu a7, 2(a1)
-; RV64-NEXT:    lbu a1, 3(a1)
+; RV64-NEXT:    lb a2, 4(a1)
+; RV64-NEXT:    lb a3, 5(a1)
+; RV64-NEXT:    lb a4, 6(a1)
+; RV64-NEXT:    lb a5, 0(a1)
+; RV64-NEXT:    lb a6, 1(a1)
+; RV64-NEXT:    lb a7, 2(a1)
+; RV64-NEXT:    lb a1, 3(a1)
 ; RV64-NEXT:    sb a2, 4(a0)
 ; RV64-NEXT:    sb a3, 5(a0)
 ; RV64-NEXT:    sb a4, 6(a0)
@@ -207,7 +207,7 @@ define void @unaligned_memmove7(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-FAST:       # %bb.0: # %entry
 ; RV64-FAST-NEXT:    lw a2, 0(a1)
 ; RV64-FAST-NEXT:    lh a3, 4(a1)
-; RV64-FAST-NEXT:    lbu a1, 6(a1)
+; RV64-FAST-NEXT:    lb a1, 6(a1)
 ; RV64-FAST-NEXT:    sw a2, 0(a0)
 ; RV64-FAST-NEXT:    sh a3, 4(a0)
 ; RV64-FAST-NEXT:    sb a1, 6(a0)
@@ -240,14 +240,14 @@ define void @unaligned_memmove8(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memmove8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 0(a1)
-; RV64-NEXT:    lbu a3, 1(a1)
-; RV64-NEXT:    lbu a4, 2(a1)
-; RV64-NEXT:    lbu a5, 3(a1)
-; RV64-NEXT:    lbu a6, 4(a1)
-; RV64-NEXT:    lbu a7, 5(a1)
-; RV64-NEXT:    lbu t0, 6(a1)
-; RV64-NEXT:    lbu a1, 7(a1)
+; RV64-NEXT:    lb a2, 0(a1)
+; RV64-NEXT:    lb a3, 1(a1)
+; RV64-NEXT:    lb a4, 2(a1)
+; RV64-NEXT:    lb a5, 3(a1)
+; RV64-NEXT:    lb a6, 4(a1)
+; RV64-NEXT:    lb a7, 5(a1)
+; RV64-NEXT:    lb t0, 6(a1)
+; RV64-NEXT:    lb a1, 7(a1)
 ; RV64-NEXT:    sb a6, 4(a0)
 ; RV64-NEXT:    sb a7, 5(a0)
 ; RV64-NEXT:    sb t0, 6(a0)
@@ -306,7 +306,7 @@ define void @unaligned_memmove15(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-FAST-NEXT:    ld a2, 0(a1)
 ; RV64-FAST-NEXT:    lw a3, 8(a1)
 ; RV64-FAST-NEXT:    lh a4, 12(a1)
-; RV64-FAST-NEXT:    lbu a1, 14(a1)
+; RV64-FAST-NEXT:    lb a1, 14(a1)
 ; RV64-FAST-NEXT:    sd a2, 0(a0)
 ; RV64-FAST-NEXT:    sw a3, 8(a0)
 ; RV64-FAST-NEXT:    sh a4, 12(a0)
@@ -366,7 +366,7 @@ define void @unaligned_memmove31(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-FAST-LABEL: unaligned_memmove31:
 ; RV64-FAST:       # %bb.0: # %entry
 ; RV64-FAST-NEXT:    lh a2, 28(a1)
-; RV64-FAST-NEXT:    lbu a3, 30(a1)
+; RV64-FAST-NEXT:    lb a3, 30(a1)
 ; RV64-FAST-NEXT:    ld a4, 0(a1)
 ; RV64-FAST-NEXT:    ld a5, 8(a1)
 ; RV64-FAST-NEXT:    ld a6, 16(a1)
@@ -408,7 +408,7 @@ define void @aligned_memmove1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memmove1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -445,7 +445,7 @@ define void @aligned_memmove3(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-BOTH-LABEL: aligned_memmove3:
 ; RV64-BOTH:       # %bb.0: # %entry
 ; RV64-BOTH-NEXT:    lh a2, 0(a1)
-; RV64-BOTH-NEXT:    lbu a1, 2(a1)
+; RV64-BOTH-NEXT:    lb a1, 2(a1)
 ; RV64-BOTH-NEXT:    sh a2, 0(a0)
 ; RV64-BOTH-NEXT:    sb a1, 2(a0)
 ; RV64-BOTH-NEXT:    ret
@@ -486,7 +486,7 @@ define void @aligned_memmove7(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-BOTH:       # %bb.0: # %entry
 ; RV64-BOTH-NEXT:    lw a2, 0(a1)
 ; RV64-BOTH-NEXT:    lh a3, 4(a1)
-; RV64-BOTH-NEXT:    lbu a1, 6(a1)
+; RV64-BOTH-NEXT:    lb a1, 6(a1)
 ; RV64-BOTH-NEXT:    sw a2, 0(a0)
 ; RV64-BOTH-NEXT:    sh a3, 4(a0)
 ; RV64-BOTH-NEXT:    sb a1, 6(a0)
@@ -535,7 +535,7 @@ define void @aligned_memmove15(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-BOTH-NEXT:    ld a2, 0(a1)
 ; RV64-BOTH-NEXT:    lw a3, 8(a1)
 ; RV64-BOTH-NEXT:    lh a4, 12(a1)
-; RV64-BOTH-NEXT:    lbu a1, 14(a1)
+; RV64-BOTH-NEXT:    lb a1, 14(a1)
 ; RV64-BOTH-NEXT:    sd a2, 0(a0)
 ; RV64-BOTH-NEXT:    sw a3, 8(a0)
 ; RV64-BOTH-NEXT:    sh a4, 12(a0)
@@ -580,7 +580,7 @@ define void @aligned_memmove31(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-BOTH-LABEL: aligned_memmove31:
 ; RV64-BOTH:       # %bb.0: # %entry
 ; RV64-BOTH-NEXT:    lh a2, 28(a1)
-; RV64-BOTH-NEXT:    lbu a3, 30(a1)
+; RV64-BOTH-NEXT:    lb a3, 30(a1)
 ; RV64-BOTH-NEXT:    ld a4, 0(a1)
 ; RV64-BOTH-NEXT:    ld a5, 8(a1)
 ; RV64-BOTH-NEXT:    ld a6, 16(a1)
@@ -656,7 +656,7 @@ define i32 @memmove11_align8(ptr nocapture %dest, ptr %src) {
 ; RV64-BOTH:       # %bb.0: # %entry
 ; RV64-BOTH-NEXT:    ld a2, 0(a1)
 ; RV64-BOTH-NEXT:    lh a3, 8(a1)
-; RV64-BOTH-NEXT:    lbu a1, 10(a1)
+; RV64-BOTH-NEXT:    lb a1, 10(a1)
 ; RV64-BOTH-NEXT:    sd a2, 0(a0)
 ; RV64-BOTH-NEXT:    sh a3, 8(a0)
 ; RV64-BOTH-NEXT:    sb a1, 10(a0)
diff --git a/llvm/test/CodeGen/RISCV/nontemporal.ll b/llvm/test/CodeGen/RISCV/nontemporal.ll
index 18e1f5b49032b..557b531892c17 100644
--- a/llvm/test/CodeGen/RISCV/nontemporal.ll
+++ b/llvm/test/CodeGen/RISCV/nontemporal.ll
@@ -907,22 +907,22 @@ define void @test_nontemporal_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64-NEXT:    lbu a2, 0(a1)
-; CHECK-RV64-NEXT:    lbu a3, 8(a1)
-; CHECK-RV64-NEXT:    lbu a4, 16(a1)
-; CHECK-RV64-NEXT:    lbu a5, 24(a1)
-; CHECK-RV64-NEXT:    lbu a6, 32(a1)
-; CHECK-RV64-NEXT:    lbu a7, 40(a1)
-; CHECK-RV64-NEXT:    lbu t0, 48(a1)
-; CHECK-RV64-NEXT:    lbu t1, 56(a1)
-; CHECK-RV64-NEXT:    lbu t2, 64(a1)
-; CHECK-RV64-NEXT:    lbu t3, 72(a1)
-; CHECK-RV64-NEXT:    lbu t4, 80(a1)
-; CHECK-RV64-NEXT:    lbu t5, 88(a1)
-; CHECK-RV64-NEXT:    lbu t6, 96(a1)
-; CHECK-RV64-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64-NEXT:    lb a2, 0(a1)
+; CHECK-RV64-NEXT:    lb a3, 8(a1)
+; CHECK-RV64-NEXT:    lb a4, 16(a1)
+; CHECK-RV64-NEXT:    lb a5, 24(a1)
+; CHECK-RV64-NEXT:    lb a6, 32(a1)
+; CHECK-RV64-NEXT:    lb a7, 40(a1)
+; CHECK-RV64-NEXT:    lb t0, 48(a1)
+; CHECK-RV64-NEXT:    lb t1, 56(a1)
+; CHECK-RV64-NEXT:    lb t2, 64(a1)
+; CHECK-RV64-NEXT:    lb t3, 72(a1)
+; CHECK-RV64-NEXT:    lb t4, 80(a1)
+; CHECK-RV64-NEXT:    lb t5, 88(a1)
+; CHECK-RV64-NEXT:    lb t6, 96(a1)
+; CHECK-RV64-NEXT:    lb s0, 104(a1)
+; CHECK-RV64-NEXT:    lb s1, 112(a1)
+; CHECK-RV64-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64-NEXT:    ntl.all
 ; CHECK-RV64-NEXT:    sb t6, 12(a0)
 ; CHECK-RV64-NEXT:    ntl.all
@@ -1035,22 +1035,22 @@ define void @test_nontemporal_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64C-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64C-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64C-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64C-NEXT:    lbu a6, 0(a1)
-; CHECK-RV64C-NEXT:    lbu a7, 8(a1)
-; CHECK-RV64C-NEXT:    lbu t0, 16(a1)
-; CHECK-RV64C-NEXT:    lbu t1, 24(a1)
-; CHECK-RV64C-NEXT:    lbu t2, 32(a1)
-; CHECK-RV64C-NEXT:    lbu t3, 40(a1)
-; CHECK-RV64C-NEXT:    lbu t4, 48(a1)
-; CHECK-RV64C-NEXT:    lbu t5, 56(a1)
-; CHECK-RV64C-NEXT:    lbu t6, 64(a1)
-; CHECK-RV64C-NEXT:    lbu a3, 72(a1)
-; CHECK-RV64C-NEXT:    lbu a4, 80(a1)
-; CHECK-RV64C-NEXT:    lbu a5, 88(a1)
-; CHECK-RV64C-NEXT:    lbu a2, 96(a1)
-; CHECK-RV64C-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64C-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64C-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64C-NEXT:    lb a6, 0(a1)
+; CHECK-RV64C-NEXT:    lb a7, 8(a1)
+; CHECK-RV64C-NEXT:    lb t0, 16(a1)
+; CHECK-RV64C-NEXT:    lb t1, 24(a1)
+; CHECK-RV64C-NEXT:    lb t2, 32(a1)
+; CHECK-RV64C-NEXT:    lb t3, 40(a1)
+; CHECK-RV64C-NEXT:    lb t4, 48(a1)
+; CHECK-RV64C-NEXT:    lb t5, 56(a1)
+; CHECK-RV64C-NEXT:    lb t6, 64(a1)
+; CHECK-RV64C-NEXT:    lb a3, 72(a1)
+; CHECK-RV64C-NEXT:    lb a4, 80(a1)
+; CHECK-RV64C-NEXT:    lb a5, 88(a1)
+; CHECK-RV64C-NEXT:    lb a2, 96(a1)
+; CHECK-RV64C-NEXT:    lb s0, 104(a1)
+; CHECK-RV64C-NEXT:    lb s1, 112(a1)
+; CHECK-RV64C-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64C-NEXT:    c.ntl.all
 ; CHECK-RV64C-NEXT:    sb a2, 12(a0)
 ; CHECK-RV64C-NEXT:    c.ntl.all
@@ -2333,22 +2333,22 @@ define void @test_nontemporal_P1_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64-NEXT:    lbu a2, 0(a1)
-; CHECK-RV64-NEXT:    lbu a3, 8(a1)
-; CHECK-RV64-NEXT:    lbu a4, 16(a1)
-; CHECK-RV64-NEXT:    lbu a5, 24(a1)
-; CHECK-RV64-NEXT:    lbu a6, 32(a1)
-; CHECK-RV64-NEXT:    lbu a7, 40(a1)
-; CHECK-RV64-NEXT:    lbu t0, 48(a1)
-; CHECK-RV64-NEXT:    lbu t1, 56(a1)
-; CHECK-RV64-NEXT:    lbu t2, 64(a1)
-; CHECK-RV64-NEXT:    lbu t3, 72(a1)
-; CHECK-RV64-NEXT:    lbu t4, 80(a1)
-; CHECK-RV64-NEXT:    lbu t5, 88(a1)
-; CHECK-RV64-NEXT:    lbu t6, 96(a1)
-; CHECK-RV64-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64-NEXT:    lb a2, 0(a1)
+; CHECK-RV64-NEXT:    lb a3, 8(a1)
+; CHECK-RV64-NEXT:    lb a4, 16(a1)
+; CHECK-RV64-NEXT:    lb a5, 24(a1)
+; CHECK-RV64-NEXT:    lb a6, 32(a1)
+; CHECK-RV64-NEXT:    lb a7, 40(a1)
+; CHECK-RV64-NEXT:    lb t0, 48(a1)
+; CHECK-RV64-NEXT:    lb t1, 56(a1)
+; CHECK-RV64-NEXT:    lb t2, 64(a1)
+; CHECK-RV64-NEXT:    lb t3, 72(a1)
+; CHECK-RV64-NEXT:    lb t4, 80(a1)
+; CHECK-RV64-NEXT:    lb t5, 88(a1)
+; CHECK-RV64-NEXT:    lb t6, 96(a1)
+; CHECK-RV64-NEXT:    lb s0, 104(a1)
+; CHECK-RV64-NEXT:    lb s1, 112(a1)
+; CHECK-RV64-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64-NEXT:    ntl.p1
 ; CHECK-RV64-NEXT:    sb t6, 12(a0)
 ; CHECK-RV64-NEXT:    ntl.p1
@@ -2461,22 +2461,22 @@ define void @test_nontemporal_P1_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64C-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64C-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64C-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64C-NEXT:    lbu a6, 0(a1)
-; CHECK-RV64C-NEXT:    lbu a7, 8(a1)
-; CHECK-RV64C-NEXT:    lbu t0, 16(a1)
-; CHECK-RV64C-NEXT:    lbu t1, 24(a1)
-; CHECK-RV64C-NEXT:    lbu t2, 32(a1)
-; CHECK-RV64C-NEXT:    lbu t3, 40(a1)
-; CHECK-RV64C-NEXT:    lbu t4, 48(a1)
-; CHECK-RV64C-NEXT:    lbu t5, 56(a1)
-; CHECK-RV64C-NEXT:    lbu t6, 64(a1)
-; CHECK-RV64C-NEXT:    lbu a3, 72(a1)
-; CHECK-RV64C-NEXT:    lbu a4, 80(a1)
-; CHECK-RV64C-NEXT:    lbu a5, 88(a1)
-; CHECK-RV64C-NEXT:    lbu a2, 96(a1)
-; CHECK-RV64C-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64C-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64C-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64C-NEXT:    lb a6, 0(a1)
+; CHECK-RV64C-NEXT:    lb a7, 8(a1)
+; CHECK-RV64C-NEXT:    lb t0, 16(a1)
+; CHECK-RV64C-NEXT:    lb t1, 24(a1)
+; CHECK-RV64C-NEXT:    lb t2, 32(a1)
+; CHECK-RV64C-NEXT:    lb t3, 40(a1)
+; CHECK-RV64C-NEXT:    lb t4, 48(a1)
+; CHECK-RV64C-NEXT:    lb t5, 56(a1)
+; CHECK-RV64C-NEXT:    lb t6, 64(a1)
+; CHECK-RV64C-NEXT:    lb a3, 72(a1)
+; CHECK-RV64C-NEXT:    lb a4, 80(a1)
+; CHECK-RV64C-NEXT:    lb a5, 88(a1)
+; CHECK-RV64C-NEXT:    lb a2, 96(a1)
+; CHECK-RV64C-NEXT:    lb s0, 104(a1)
+; CHECK-RV64C-NEXT:    lb s1, 112(a1)
+; CHECK-RV64C-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64C-NEXT:    c.ntl.p1
 ; CHECK-RV64C-NEXT:    sb a2, 12(a0)
 ; CHECK-RV64C-NEXT:    c.ntl.p1
@@ -3759,22 +3759,22 @@ define void @test_nontemporal_PALL_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64-NEXT:    lbu a2, 0(a1)
-; CHECK-RV64-NEXT:    lbu a3, 8(a1)
-; CHECK-RV64-NEXT:    lbu a4, 16(a1)
-; CHECK-RV64-NEXT:    lbu a5, 24(a1)
-; CHECK-RV64-NEXT:    lbu a6, 32(a1)
-; CHECK-RV64-NEXT:    lbu a7, 40(a1)
-; CHECK-RV64-NEXT:    lbu t0, 48(a1)
-; CHECK-RV64-NEXT:    lbu t1, 56(a1)
-; CHECK-RV64-NEXT:    lbu t2, 64(a1)
-; CHECK-RV64-NEXT:    lbu t3, 72(a1)
-; CHECK-RV64-NEXT:    lbu t4, 80(a1)
-; CHECK-RV64-NEXT:    lbu t5, 88(a1)
-; CHECK-RV64-NEXT:    lbu t6, 96(a1)
-; CHECK-RV64-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64-NEXT:    lb a2, 0(a1)
+; CHECK-RV64-NEXT:    lb a3, 8(a1)
+; CHECK-RV64-NEXT:    lb a4, 16(a1)
+; CHECK-RV64-NEXT:    lb a5, 24(a1)
+; CHECK-RV64-NEXT:    lb a6, 32(a1)
+; CHECK-RV64-NEXT:    lb a7, 40(a1)
+; CHECK-RV64-NEXT:    lb t0, 48(a1)
+; CHECK-RV64-NEXT:    lb t1, 56(a1)
+; CHECK-RV64-NEXT:    lb t2, 64(a1)
+; CHECK-RV64-NEXT:    lb t3, 72(a1)
+; CHECK-RV64-NEXT:    lb t4, 80(a1)
+; CHECK-RV64-NEXT:    lb t5, 88(a1)
+; CHECK-RV64-NEXT:    lb t6, 96(a1)
+; CHECK-RV64-NEXT:    lb s0, 104(a1)
+; CHECK-RV64-NEXT:    lb s1, 112(a1)
+; CHECK-RV64-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64-NEXT:    ntl.pall
 ; CHECK-RV64-NEXT:    sb t6, 12(a0)
 ; CHECK-RV64-NEXT:    ntl.pall
@@ -3887,22 +3887,22 @@ define void @test_nontemporal_PALL_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64C-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64C-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64C-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64C-NEXT:    lbu a6, 0(a1)
-; CHECK-RV64C-NEXT:    lbu a7, 8(a1)
-; CHECK-RV64C-NEXT:    lbu t0, 16(a1)
-; CHECK-RV64C-NEXT:    lbu t1, 24(a1)
-; CHECK-RV64C-NEXT:    lbu t2, 32(a1)
-; CHECK-RV64C-NEXT:    lbu t3, 40(a1)
-; CHECK-RV64C-NEXT:    lbu t4, 48(a1)
-; CHECK-RV64C-NEXT:    lbu t5, 56(a1)
-; CHECK-RV64C-NEXT:    lbu t6, 64(a1)
-; CHECK-RV64C-NEXT:    lbu a3, 72(a1)
-; CHECK-RV64C-NEXT:    lbu a4, 80(a1)
-; CHECK-RV64C-NEXT:    lbu a5, 88(a1)
-; CHECK-RV64C-NEXT:    lbu a2, 96(a1)
-; CHECK-RV64C-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64C-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64C-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64C-NEXT:    lb a6, 0(a1)
+; CHECK-RV64C-NEXT:    lb a7, 8(a1)
+; CHECK-RV64C-NEXT:    lb t0, 16(a1)
+; CHECK-RV64C-NEXT:    lb t1, 24(a1)
+; CHECK-RV64C-NEXT:    lb t2, 32(a1)
+; CHECK-RV64C-NEXT:    lb t3, 40(a1)
+; CHECK-RV64C-NEXT:    lb t4, 48(a1)
+; CHECK-RV64C-NEXT:    lb t5, 56(a1)
+; CHECK-RV64C-NEXT:    lb t6, 64(a1)
+; CHECK-RV64C-NEXT:    lb a3, 72(a1)
+; CHECK-RV64C-NEXT:    lb a4, 80(a1)
+; CHECK-RV64C-NEXT:    lb a5, 88(a1)
+; CHECK-RV64C-NEXT:    lb a2, 96(a1)
+; CHECK-RV64C-NEXT:    lb s0, 104(a1)
+; CHECK-RV64C-NEXT:    lb s1, 112(a1)
+; CHECK-RV64C-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64C-NEXT:    c.ntl.pall
 ; CHECK-RV64C-NEXT:    sb a2, 12(a0)
 ; CHECK-RV64C-NEXT:    c.ntl.pall
@@ -5185,22 +5185,22 @@ define void @test_nontemporal_S1_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64-NEXT:    lbu a2, 0(a1)
-; CHECK-RV64-NEXT:    lbu a3, 8(a1)
-; CHECK-RV64-NEXT:    lbu a4, 16(a1)
-; CHECK-RV64-NEXT:    lbu a5, 24(a1)
-; CHECK-RV64-NEXT:    lbu a6, 32(a1)
-; CHECK-RV64-NEXT:    lbu a7, 40(a1)
-; CHECK-RV64-NEXT:    lbu t0, 48(a1)
-; CHECK-RV64-NEXT:    lbu t1, 56(a1)
-; CHECK-RV64-NEXT:    lbu t2, 64(a1)
-; CHECK-RV64-NEXT:    lbu t3, 72(a1)
-; CHECK-RV64-NEXT:    lbu t4, 80(a1)
-; CHECK-RV64-NEXT:    lbu t5, 88(a1)
-; CHECK-RV64-NEXT:    lbu t6, 96(a1)
-; CHECK-RV64-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64-NEXT:    lb a2, 0(a1)
+; CHECK-RV64-NEXT:    lb a3, 8(a1)
+; CHECK-RV64-NEXT:    lb a4, 16(a1)
+; CHECK-RV64-NEXT:    lb a5, 24(a1)
+; CHECK-RV64-NEXT:    lb a6, 32(a1)
+; CHECK-RV64-NEXT:    lb a7, 40(a1)
+; CHECK-RV64-NEXT:    lb t0, 48(a1)
+; CHECK-RV64-NEXT:    lb t1, 56(a1)
+; CHECK-RV64-NEXT:    lb t2, 64(a1)
+; CHECK-RV64-NEXT:    lb t3, 72(a1)
+; CHECK-RV64-NEXT:    lb t4, 80(a1)
+; CHECK-RV64-NEXT:    lb t5, 88(a1)
+; CHECK-RV64-NEXT:    lb t6, 96(a1)
+; CHECK-RV64-NEXT:    lb s0, 104(a1)
+; CHECK-RV64-NEXT:    lb s1, 112(a1)
+; CHECK-RV64-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64-NEXT:    ntl.s1
 ; CHECK-RV64-NEXT:    sb t6, 12(a0)
 ; CHECK-RV64-NEXT:    ntl.s1
@@ -5313,22 +5313,22 @@ define void @test_nontemporal_S1_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64C-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64C-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64C-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64C-NEXT:    lbu a6, 0(a1)
-; CHECK-RV64C-NEXT:    lbu a7, 8(a1)
-; CHECK-RV64C-NEXT:    lbu t0, 16(a1)
-; CHECK-RV64C-NEXT:    lbu t1, 24(a1)
-; CHECK-RV64C-NEXT:    lbu t2, 32(a1)
-; CHECK-RV64C-NEXT:    lbu t3, 40(a1)
-; CHECK-RV64C-NEXT:    lbu t4, 48(a1)
-; CHECK-RV64C-NEXT:    lbu t5, 56(a1)
-; CHECK-RV64C-NEXT:    lbu t6, 64(a1)
-; CHECK-RV64C-NEXT:    lbu a3, 72(a1)
-; CHECK-RV64C-NEXT:    lbu a4, 80(a1)
-; CHECK-RV64C-NEXT:    lbu a5, 88(a1)
-; CHECK-RV64C-NEXT:    lbu a2, 96(a1)
-; CHECK-RV64C-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64C-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64C-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64C-NEXT:    lb a6, 0(a1)
+; CHECK-RV64C-NEXT:    lb a7, 8(a1)
+; CHECK-RV64C-NEXT:    lb t0, 16(a1)
+; CHECK-RV64C-NEXT:    lb t1, 24(a1)
+; CHECK-RV64C-NEXT:    lb t2, 32(a1)
+; CHECK-RV64C-NEXT:    lb t3, 40(a1)
+; CHECK-RV64C-NEXT:    lb t4, 48(a1)
+; CHECK-RV64C-NEXT:    lb t5, 56(a1)
+; CHECK-RV64C-NEXT:    lb t6, 64(a1)
+; CHECK-RV64C-NEXT:    lb a3, 72(a1)
+; CHECK-RV64C-NEXT:    lb a4, 80(a1)
+; CHECK-RV64C-NEXT:    lb a5, 88(a1)
+; CHECK-RV64C-NEXT:    lb a2, 96(a1)
+; CHECK-RV64C-NEXT:    lb s0, 104(a1)
+; CHECK-RV64C-NEXT:    lb s1, 112(a1)
+; CHECK-RV64C-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64C-NEXT:    c.ntl.s1
 ; CHECK-RV64C-NEXT:    sb a2, 12(a0)
 ; CHECK-RV64C-NEXT:    c.ntl.s1
@@ -6611,22 +6611,22 @@ define void @test_nontemporal_ALL_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64-NEXT:    lbu a2, 0(a1)
-; CHECK-RV64-NEXT:    lbu a3, 8(a1)
-; CHECK-RV64-NEXT:    lbu a4, 16(a1)
-; CHECK-RV64-NEXT:    lbu a5, 24(a1)
-; CHECK-RV64-NEXT:    lbu a6, 32(a1)
-; CHECK-RV64-NEXT:    lbu a7, 40(a1)
-; CHECK-RV64-NEXT:    lbu t0, 48(a1)
-; CHECK-RV64-NEXT:    lbu t1, 56(a1)
-; CHECK-RV64-NEXT:    lbu t2, 64(a1)
-; CHECK-RV64-NEXT:    lbu t3, 72(a1)
-; CHECK-RV64-NEXT:    lbu t4, 80(a1)
-; CHECK-RV64-NEXT:    lbu t5, 88(a1)
-; CHECK-RV64-NEXT:    lbu t6, 96(a1)
-; CHECK-RV64-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64-NEXT:    lb a2, 0(a1)
+; CHECK-RV64-NEXT:    lb a3, 8(a1)
+; CHECK-RV64-NEXT:    lb a4, 16(a1)
+; CHECK-RV64-NEXT:    lb a5, 24(a1)
+; CHECK-RV64-NEXT:    lb a6, 32(a1)
+; CHECK-RV64-NEXT:    lb a7, 40(a1)
+; CHECK-RV64-NEXT:    lb t0, 48(a1)
+; CHECK-RV64-NEXT:    lb t1, 56(a1)
+; CHECK-RV64-NEXT:    lb t2, 64(a1)
+; CHECK-RV64-NEXT:    lb t3, 72(a1)
+; CHECK-RV64-NEXT:    lb t4, 80(a1)
+; CHECK-RV64-NEXT:    lb t5, 88(a1)
+; CHECK-RV64-NEXT:    lb t6, 96(a1)
+; CHECK-RV64-NEXT:    lb s0, 104(a1)
+; CHECK-RV64-NEXT:    lb s1, 112(a1)
+; CHECK-RV64-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64-NEXT:    ntl.all
 ; CHECK-RV64-NEXT:    sb t6, 12(a0)
 ; CHECK-RV64-NEXT:    ntl.all
@@ -6739,22 +6739,22 @@ define void @test_nontemporal_ALL_store_v16i8(ptr %p, <16 x i8> %v) {
 ; CHECK-RV64C-NEXT:    sd s1, 0(sp) # 8-byte Folded Spill
 ; CHECK-RV64C-NEXT:    .cfi_offset s0, -8
 ; CHECK-RV64C-NEXT:    .cfi_offset s1, -16
-; CHECK-RV64C-NEXT:    lbu a6, 0(a1)
-; CHECK-RV64C-NEXT:    lbu a7, 8(a1)
-; CHECK-RV64C-NEXT:    lbu t0, 16(a1)
-; CHECK-RV64C-NEXT:    lbu t1, 24(a1)
-; CHECK-RV64C-NEXT:    lbu t2, 32(a1)
-; CHECK-RV64C-NEXT:    lbu t3, 40(a1)
-; CHECK-RV64C-NEXT:    lbu t4, 48(a1)
-; CHECK-RV64C-NEXT:    lbu t5, 56(a1)
-; CHECK-RV64C-NEXT:    lbu t6, 64(a1)
-; CHECK-RV64C-NEXT:    lbu a3, 72(a1)
-; CHECK-RV64C-NEXT:    lbu a4, 80(a1)
-; CHECK-RV64C-NEXT:    lbu a5, 88(a1)
-; CHECK-RV64C-NEXT:    lbu a2, 96(a1)
-; CHECK-RV64C-NEXT:    lbu s0, 104(a1)
-; CHECK-RV64C-NEXT:    lbu s1, 112(a1)
-; CHECK-RV64C-NEXT:    lbu a1, 120(a1)
+; CHECK-RV64C-NEXT:    lb a6, 0(a1)
+; CHECK-RV64C-NEXT:    lb a7, 8(a1)
+; CHECK-RV64C-NEXT:    lb t0, 16(a1)
+; CHECK-RV64C-NEXT:    lb t1, 24(a1)
+; CHECK-RV64C-NEXT:    lb t2, 32(a1)
+; CHECK-RV64C-NEXT:    lb t3, 40(a1)
+; CHECK-RV64C-NEXT:    lb t4, 48(a1)
+; CHECK-RV64C-NEXT:    lb t5, 56(a1)
+; CHECK-RV64C-NEXT:    lb t6, 64(a1)
+; CHECK-RV64C-NEXT:    lb a3, 72(a1)
+; CHECK-RV64C-NEXT:    lb a4, 80(a1)
+; CHECK-RV64C-NEXT:    lb a5, 88(a1)
+; CHECK-RV64C-NEXT:    lb a2, 96(a1)
+; CHECK-RV64C-NEXT:    lb s0, 104(a1)
+; CHECK-RV64C-NEXT:    lb s1, 112(a1)
+; CHECK-RV64C-NEXT:    lb a1, 120(a1)
 ; CHECK-RV64C-NEXT:    c.ntl.all
 ; CHECK-RV64C-NEXT:    sb a2, 12(a0)
 ; CHECK-RV64C-NEXT:    c.ntl.all
diff --git a/llvm/test/CodeGen/RISCV/prefer-w-inst.mir b/llvm/test/CodeGen/RISCV/prefer-w-inst.mir
index e05e27af4271c..b8ff7832fc7de 100644
--- a/llvm/test/CodeGen/RISCV/prefer-w-inst.mir
+++ b/llvm/test/CodeGen/RISCV/prefer-w-inst.mir
@@ -239,8 +239,8 @@ body:             |
     ; NO-PREFER-W-INST-NEXT: {{  $}}
     ; NO-PREFER-W-INST-NEXT: [[COPY:%[0-9]+]]:gpr = COPY $x10
     ; NO-PREFER-W-INST-NEXT: [[COPY1:%[0-9]+]]:gpr = COPY $x11
-    ; NO-PREFER-W-INST-NEXT: [[LWU:%[0-9]+]]:gpr = LWU [[COPY]], 0
-    ; NO-PREFER-W-INST-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LWU]], 1
+    ; NO-PREFER-W-INST-NEXT: [[LW:%[0-9]+]]:gpr = LW [[COPY]], 0
+    ; NO-PREFER-W-INST-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LW]], 1
     ; NO-PREFER-W-INST-NEXT: $x10 = COPY [[ADDIW]]
     ; NO-PREFER-W-INST-NEXT: PseudoRET
     ;
diff --git a/llvm/test/CodeGen/RISCV/rv64zbb.ll b/llvm/test/CodeGen/RISCV/rv64zbb.ll
index e6407279870db..187616f808281 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbb.ll
@@ -701,7 +701,7 @@ define signext i32 @ctpop_i32_load(ptr %p) nounwind {
 ;
 ; RV64ZBB-LABEL: ctpop_i32_load:
 ; RV64ZBB:       # %bb.0:
-; RV64ZBB-NEXT:    lwu a0, 0(a0)
+; RV64ZBB-NEXT:    lw a0, 0(a0)
 ; RV64ZBB-NEXT:    cpopw a0, a0
 ; RV64ZBB-NEXT:    ret
   %a = load i32, ptr %p
diff --git a/llvm/test/CodeGen/RISCV/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
index 696c2a5e0f806..818ea723ca2e1 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
@@ -114,7 +114,7 @@ define i64 @pack_i64_3(ptr %0, ptr %1) {
 ; RV64ZBKB-LABEL: pack_i64_3:
 ; RV64ZBKB:       # %bb.0:
 ; RV64ZBKB-NEXT:    lw a0, 0(a0)
-; RV64ZBKB-NEXT:    lwu a1, 0(a1)
+; RV64ZBKB-NEXT:    lw a1, 0(a1)
 ; RV64ZBKB-NEXT:    pack a0, a1, a0
 ; RV64ZBKB-NEXT:    ret
   %3 = load i32, ptr %0, align 4
diff --git a/llvm/test/CodeGen/RISCV/rvv/expandload.ll b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
index 9173fa4622487..aae3f579ad172 100644
--- a/llvm/test/CodeGen/RISCV/rvv/expandload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/expandload.ll
@@ -10803,7 +10803,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a1, a2, 2
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_63
 ; CHECK-RV64-NEXT:  .LBB61_62: # %cond.load241
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -10820,7 +10820,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 1
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_65
 ; CHECK-RV64-NEXT:  # %bb.64: # %cond.load245
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
 ; CHECK-RV64-NEXT:    li a1, 63
@@ -11083,7 +11083,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a2, a1, 2
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_129
 ; CHECK-RV64-NEXT:  .LBB61_128: # %cond.load497
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -11100,7 +11100,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 2
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_131
 ; CHECK-RV64-NEXT:  # %bb.130: # %cond.load501
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
 ; CHECK-RV64-NEXT:    li a2, 127
@@ -11363,7 +11363,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a1, a2, 2
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_195
 ; CHECK-RV64-NEXT:  .LBB61_194: # %cond.load753
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -11380,7 +11380,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 3
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_197
 ; CHECK-RV64-NEXT:  # %bb.196: # %cond.load757
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
 ; CHECK-RV64-NEXT:    li a1, 191
@@ -11643,7 +11643,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a2, a1, 2
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_261
 ; CHECK-RV64-NEXT:  .LBB61_260: # %cond.load1009
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -11660,7 +11660,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 4
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_263
 ; CHECK-RV64-NEXT:  # %bb.262: # %cond.load1013
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
 ; CHECK-RV64-NEXT:    li a2, 255
@@ -11923,7 +11923,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a1, a2, 2
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_327
 ; CHECK-RV64-NEXT:  .LBB61_326: # %cond.load1265
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -11938,7 +11938,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 5
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_329
 ; CHECK-RV64-NEXT:  # %bb.328: # %cond.load1269
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
 ; CHECK-RV64-NEXT:    li a1, 319
 ; CHECK-RV64-NEXT:    li a3, 318
@@ -12198,7 +12198,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a2, a1, 2
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_393
 ; CHECK-RV64-NEXT:  .LBB61_392: # %cond.load1521
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -12213,7 +12213,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 6
 ; CHECK-RV64-NEXT:    bgez a2, .LBB61_395
 ; CHECK-RV64-NEXT:  # %bb.394: # %cond.load1525
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
 ; CHECK-RV64-NEXT:    li a2, 383
 ; CHECK-RV64-NEXT:    li a3, 382
@@ -12473,7 +12473,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    slli a1, a2, 2
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_459
 ; CHECK-RV64-NEXT:  .LBB61_458: # %cond.load1777
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -12488,7 +12488,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    vslidedown.vi v24, v0, 7
 ; CHECK-RV64-NEXT:    bgez a1, .LBB61_461
 ; CHECK-RV64-NEXT:  # %bb.460: # %cond.load1781
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
 ; CHECK-RV64-NEXT:    li a1, 447
 ; CHECK-RV64-NEXT:    li a3, 446
@@ -12758,7 +12758,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_526: # %else2042
 ; CHECK-RV64-NEXT:    ret
 ; CHECK-RV64-NEXT:  .LBB61_527: # %cond.load
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vsetvli zero, zero, e8, mf8, tu, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v8, a1
@@ -12769,7 +12769,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_528
 ; CHECK-RV64-NEXT:    j .LBB61_2
 ; CHECK-RV64-NEXT:  .LBB61_528: # %cond.load1
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 2, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12781,7 +12781,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_529
 ; CHECK-RV64-NEXT:    j .LBB61_3
 ; CHECK-RV64-NEXT:  .LBB61_529: # %cond.load5
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 3, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12793,7 +12793,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_530
 ; CHECK-RV64-NEXT:    j .LBB61_4
 ; CHECK-RV64-NEXT:  .LBB61_530: # %cond.load9
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 4, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12805,7 +12805,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_531
 ; CHECK-RV64-NEXT:    j .LBB61_5
 ; CHECK-RV64-NEXT:  .LBB61_531: # %cond.load13
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 5, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12817,7 +12817,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_532
 ; CHECK-RV64-NEXT:    j .LBB61_6
 ; CHECK-RV64-NEXT:  .LBB61_532: # %cond.load17
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 6, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12829,7 +12829,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_533
 ; CHECK-RV64-NEXT:    j .LBB61_7
 ; CHECK-RV64-NEXT:  .LBB61_533: # %cond.load21
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 7, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12841,7 +12841,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_534
 ; CHECK-RV64-NEXT:    j .LBB61_8
 ; CHECK-RV64-NEXT:  .LBB61_534: # %cond.load25
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 8, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12853,7 +12853,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_535
 ; CHECK-RV64-NEXT:    j .LBB61_9
 ; CHECK-RV64-NEXT:  .LBB61_535: # %cond.load29
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 9, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12865,7 +12865,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_536
 ; CHECK-RV64-NEXT:    j .LBB61_10
 ; CHECK-RV64-NEXT:  .LBB61_536: # %cond.load33
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 10, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12877,7 +12877,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_537
 ; CHECK-RV64-NEXT:    j .LBB61_11
 ; CHECK-RV64-NEXT:  .LBB61_537: # %cond.load37
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 11, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12889,7 +12889,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_538
 ; CHECK-RV64-NEXT:    j .LBB61_12
 ; CHECK-RV64-NEXT:  .LBB61_538: # %cond.load41
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 12, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12901,7 +12901,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_539
 ; CHECK-RV64-NEXT:    j .LBB61_13
 ; CHECK-RV64-NEXT:  .LBB61_539: # %cond.load45
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 13, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12913,7 +12913,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_540
 ; CHECK-RV64-NEXT:    j .LBB61_14
 ; CHECK-RV64-NEXT:  .LBB61_540: # %cond.load49
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 14, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12925,7 +12925,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_541
 ; CHECK-RV64-NEXT:    j .LBB61_15
 ; CHECK-RV64-NEXT:  .LBB61_541: # %cond.load53
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 15, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12937,7 +12937,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_542
 ; CHECK-RV64-NEXT:    j .LBB61_16
 ; CHECK-RV64-NEXT:  .LBB61_542: # %cond.load57
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 16, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12949,7 +12949,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_543
 ; CHECK-RV64-NEXT:    j .LBB61_17
 ; CHECK-RV64-NEXT:  .LBB61_543: # %cond.load61
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 17, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12961,7 +12961,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_544
 ; CHECK-RV64-NEXT:    j .LBB61_18
 ; CHECK-RV64-NEXT:  .LBB61_544: # %cond.load65
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 18, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12973,7 +12973,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_545
 ; CHECK-RV64-NEXT:    j .LBB61_19
 ; CHECK-RV64-NEXT:  .LBB61_545: # %cond.load69
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 19, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12985,7 +12985,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_546
 ; CHECK-RV64-NEXT:    j .LBB61_20
 ; CHECK-RV64-NEXT:  .LBB61_546: # %cond.load73
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 20, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -12997,7 +12997,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_547
 ; CHECK-RV64-NEXT:    j .LBB61_21
 ; CHECK-RV64-NEXT:  .LBB61_547: # %cond.load77
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 21, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13009,7 +13009,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_548
 ; CHECK-RV64-NEXT:    j .LBB61_22
 ; CHECK-RV64-NEXT:  .LBB61_548: # %cond.load81
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 22, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13021,7 +13021,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_549
 ; CHECK-RV64-NEXT:    j .LBB61_23
 ; CHECK-RV64-NEXT:  .LBB61_549: # %cond.load85
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 23, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13033,7 +13033,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_550
 ; CHECK-RV64-NEXT:    j .LBB61_24
 ; CHECK-RV64-NEXT:  .LBB61_550: # %cond.load89
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 24, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13045,7 +13045,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_551
 ; CHECK-RV64-NEXT:    j .LBB61_25
 ; CHECK-RV64-NEXT:  .LBB61_551: # %cond.load93
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 25, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13057,7 +13057,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_552
 ; CHECK-RV64-NEXT:    j .LBB61_26
 ; CHECK-RV64-NEXT:  .LBB61_552: # %cond.load97
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 26, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13069,7 +13069,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_553
 ; CHECK-RV64-NEXT:    j .LBB61_27
 ; CHECK-RV64-NEXT:  .LBB61_553: # %cond.load101
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 27, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13081,7 +13081,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_554
 ; CHECK-RV64-NEXT:    j .LBB61_28
 ; CHECK-RV64-NEXT:  .LBB61_554: # %cond.load105
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 28, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13093,7 +13093,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_555
 ; CHECK-RV64-NEXT:    j .LBB61_29
 ; CHECK-RV64-NEXT:  .LBB61_555: # %cond.load109
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 29, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13105,7 +13105,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_556
 ; CHECK-RV64-NEXT:    j .LBB61_30
 ; CHECK-RV64-NEXT:  .LBB61_556: # %cond.load113
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 30, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13117,7 +13117,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_557
 ; CHECK-RV64-NEXT:    j .LBB61_31
 ; CHECK-RV64-NEXT:  .LBB61_557: # %cond.load117
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 31, e8, m1, tu, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13129,7 +13129,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_558
 ; CHECK-RV64-NEXT:    j .LBB61_32
 ; CHECK-RV64-NEXT:  .LBB61_558: # %cond.load121
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13143,7 +13143,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_559
 ; CHECK-RV64-NEXT:    j .LBB61_33
 ; CHECK-RV64-NEXT:  .LBB61_559: # %cond.load125
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13158,7 +13158,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_560
 ; CHECK-RV64-NEXT:    j .LBB61_34
 ; CHECK-RV64-NEXT:  .LBB61_560: # %cond.load129
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13173,7 +13173,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_561
 ; CHECK-RV64-NEXT:    j .LBB61_35
 ; CHECK-RV64-NEXT:  .LBB61_561: # %cond.load133
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13188,7 +13188,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_562
 ; CHECK-RV64-NEXT:    j .LBB61_36
 ; CHECK-RV64-NEXT:  .LBB61_562: # %cond.load137
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13203,7 +13203,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_563
 ; CHECK-RV64-NEXT:    j .LBB61_37
 ; CHECK-RV64-NEXT:  .LBB61_563: # %cond.load141
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13218,7 +13218,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_564
 ; CHECK-RV64-NEXT:    j .LBB61_38
 ; CHECK-RV64-NEXT:  .LBB61_564: # %cond.load145
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13233,7 +13233,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_565
 ; CHECK-RV64-NEXT:    j .LBB61_39
 ; CHECK-RV64-NEXT:  .LBB61_565: # %cond.load149
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13248,7 +13248,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_566
 ; CHECK-RV64-NEXT:    j .LBB61_40
 ; CHECK-RV64-NEXT:  .LBB61_566: # %cond.load153
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13263,7 +13263,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_567
 ; CHECK-RV64-NEXT:    j .LBB61_41
 ; CHECK-RV64-NEXT:  .LBB61_567: # %cond.load157
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13278,7 +13278,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_568
 ; CHECK-RV64-NEXT:    j .LBB61_42
 ; CHECK-RV64-NEXT:  .LBB61_568: # %cond.load161
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13293,7 +13293,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_569
 ; CHECK-RV64-NEXT:    j .LBB61_43
 ; CHECK-RV64-NEXT:  .LBB61_569: # %cond.load165
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13308,7 +13308,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_570
 ; CHECK-RV64-NEXT:    j .LBB61_44
 ; CHECK-RV64-NEXT:  .LBB61_570: # %cond.load169
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13323,7 +13323,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_571
 ; CHECK-RV64-NEXT:    j .LBB61_45
 ; CHECK-RV64-NEXT:  .LBB61_571: # %cond.load173
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13338,7 +13338,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_572
 ; CHECK-RV64-NEXT:    j .LBB61_46
 ; CHECK-RV64-NEXT:  .LBB61_572: # %cond.load177
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13353,7 +13353,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_573
 ; CHECK-RV64-NEXT:    j .LBB61_47
 ; CHECK-RV64-NEXT:  .LBB61_573: # %cond.load181
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13368,7 +13368,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_574
 ; CHECK-RV64-NEXT:    j .LBB61_48
 ; CHECK-RV64-NEXT:  .LBB61_574: # %cond.load185
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13383,7 +13383,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_575
 ; CHECK-RV64-NEXT:    j .LBB61_49
 ; CHECK-RV64-NEXT:  .LBB61_575: # %cond.load189
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13398,7 +13398,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_576
 ; CHECK-RV64-NEXT:    j .LBB61_50
 ; CHECK-RV64-NEXT:  .LBB61_576: # %cond.load193
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13413,7 +13413,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_577
 ; CHECK-RV64-NEXT:    j .LBB61_51
 ; CHECK-RV64-NEXT:  .LBB61_577: # %cond.load197
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13428,7 +13428,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_578
 ; CHECK-RV64-NEXT:    j .LBB61_52
 ; CHECK-RV64-NEXT:  .LBB61_578: # %cond.load201
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13443,7 +13443,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_579
 ; CHECK-RV64-NEXT:    j .LBB61_53
 ; CHECK-RV64-NEXT:  .LBB61_579: # %cond.load205
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13458,7 +13458,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_580
 ; CHECK-RV64-NEXT:    j .LBB61_54
 ; CHECK-RV64-NEXT:  .LBB61_580: # %cond.load209
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13473,7 +13473,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_581
 ; CHECK-RV64-NEXT:    j .LBB61_55
 ; CHECK-RV64-NEXT:  .LBB61_581: # %cond.load213
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13488,7 +13488,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_582
 ; CHECK-RV64-NEXT:    j .LBB61_56
 ; CHECK-RV64-NEXT:  .LBB61_582: # %cond.load217
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13503,7 +13503,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_583
 ; CHECK-RV64-NEXT:    j .LBB61_57
 ; CHECK-RV64-NEXT:  .LBB61_583: # %cond.load221
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13518,7 +13518,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_584
 ; CHECK-RV64-NEXT:    j .LBB61_58
 ; CHECK-RV64-NEXT:  .LBB61_584: # %cond.load225
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13533,7 +13533,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_585
 ; CHECK-RV64-NEXT:    j .LBB61_59
 ; CHECK-RV64-NEXT:  .LBB61_585: # %cond.load229
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13548,7 +13548,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_586
 ; CHECK-RV64-NEXT:    j .LBB61_60
 ; CHECK-RV64-NEXT:  .LBB61_586: # %cond.load233
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13563,7 +13563,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_587
 ; CHECK-RV64-NEXT:    j .LBB61_61
 ; CHECK-RV64-NEXT:  .LBB61_587: # %cond.load237
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a1
@@ -13580,7 +13580,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1025: # %cond.load237
 ; CHECK-RV64-NEXT:    j .LBB61_63
 ; CHECK-RV64-NEXT:  .LBB61_588: # %cond.load249
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v9, a2
 ; CHECK-RV64-NEXT:    li a2, 64
@@ -13594,7 +13594,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_589
 ; CHECK-RV64-NEXT:    j .LBB61_67
 ; CHECK-RV64-NEXT:  .LBB61_589: # %cond.load253
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13609,7 +13609,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_590
 ; CHECK-RV64-NEXT:    j .LBB61_68
 ; CHECK-RV64-NEXT:  .LBB61_590: # %cond.load257
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13624,7 +13624,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_591
 ; CHECK-RV64-NEXT:    j .LBB61_69
 ; CHECK-RV64-NEXT:  .LBB61_591: # %cond.load261
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13639,7 +13639,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_592
 ; CHECK-RV64-NEXT:    j .LBB61_70
 ; CHECK-RV64-NEXT:  .LBB61_592: # %cond.load265
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13654,7 +13654,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_593
 ; CHECK-RV64-NEXT:    j .LBB61_71
 ; CHECK-RV64-NEXT:  .LBB61_593: # %cond.load269
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13669,7 +13669,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_594
 ; CHECK-RV64-NEXT:    j .LBB61_72
 ; CHECK-RV64-NEXT:  .LBB61_594: # %cond.load273
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13684,7 +13684,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_595
 ; CHECK-RV64-NEXT:    j .LBB61_73
 ; CHECK-RV64-NEXT:  .LBB61_595: # %cond.load277
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13699,7 +13699,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_596
 ; CHECK-RV64-NEXT:    j .LBB61_74
 ; CHECK-RV64-NEXT:  .LBB61_596: # %cond.load281
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13714,7 +13714,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_597
 ; CHECK-RV64-NEXT:    j .LBB61_75
 ; CHECK-RV64-NEXT:  .LBB61_597: # %cond.load285
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13729,7 +13729,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_598
 ; CHECK-RV64-NEXT:    j .LBB61_76
 ; CHECK-RV64-NEXT:  .LBB61_598: # %cond.load289
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13744,7 +13744,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_599
 ; CHECK-RV64-NEXT:    j .LBB61_77
 ; CHECK-RV64-NEXT:  .LBB61_599: # %cond.load293
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13759,7 +13759,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_600
 ; CHECK-RV64-NEXT:    j .LBB61_78
 ; CHECK-RV64-NEXT:  .LBB61_600: # %cond.load297
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13774,7 +13774,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_601
 ; CHECK-RV64-NEXT:    j .LBB61_79
 ; CHECK-RV64-NEXT:  .LBB61_601: # %cond.load301
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13789,7 +13789,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_602
 ; CHECK-RV64-NEXT:    j .LBB61_80
 ; CHECK-RV64-NEXT:  .LBB61_602: # %cond.load305
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13804,7 +13804,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_603
 ; CHECK-RV64-NEXT:    j .LBB61_81
 ; CHECK-RV64-NEXT:  .LBB61_603: # %cond.load309
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13819,7 +13819,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_604
 ; CHECK-RV64-NEXT:    j .LBB61_82
 ; CHECK-RV64-NEXT:  .LBB61_604: # %cond.load313
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13834,7 +13834,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_605
 ; CHECK-RV64-NEXT:    j .LBB61_83
 ; CHECK-RV64-NEXT:  .LBB61_605: # %cond.load317
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13849,7 +13849,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_606
 ; CHECK-RV64-NEXT:    j .LBB61_84
 ; CHECK-RV64-NEXT:  .LBB61_606: # %cond.load321
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13864,7 +13864,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_607
 ; CHECK-RV64-NEXT:    j .LBB61_85
 ; CHECK-RV64-NEXT:  .LBB61_607: # %cond.load325
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13879,7 +13879,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_608
 ; CHECK-RV64-NEXT:    j .LBB61_86
 ; CHECK-RV64-NEXT:  .LBB61_608: # %cond.load329
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13894,7 +13894,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_609
 ; CHECK-RV64-NEXT:    j .LBB61_87
 ; CHECK-RV64-NEXT:  .LBB61_609: # %cond.load333
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13909,7 +13909,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_610
 ; CHECK-RV64-NEXT:    j .LBB61_88
 ; CHECK-RV64-NEXT:  .LBB61_610: # %cond.load337
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13924,7 +13924,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_611
 ; CHECK-RV64-NEXT:    j .LBB61_89
 ; CHECK-RV64-NEXT:  .LBB61_611: # %cond.load341
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13939,7 +13939,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_612
 ; CHECK-RV64-NEXT:    j .LBB61_90
 ; CHECK-RV64-NEXT:  .LBB61_612: # %cond.load345
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13954,7 +13954,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_613
 ; CHECK-RV64-NEXT:    j .LBB61_91
 ; CHECK-RV64-NEXT:  .LBB61_613: # %cond.load349
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13969,7 +13969,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_614
 ; CHECK-RV64-NEXT:    j .LBB61_92
 ; CHECK-RV64-NEXT:  .LBB61_614: # %cond.load353
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13984,7 +13984,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_615
 ; CHECK-RV64-NEXT:    j .LBB61_93
 ; CHECK-RV64-NEXT:  .LBB61_615: # %cond.load357
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -13999,7 +13999,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_616
 ; CHECK-RV64-NEXT:    j .LBB61_94
 ; CHECK-RV64-NEXT:  .LBB61_616: # %cond.load361
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14014,7 +14014,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_617
 ; CHECK-RV64-NEXT:    j .LBB61_95
 ; CHECK-RV64-NEXT:  .LBB61_617: # %cond.load365
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14029,7 +14029,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_618
 ; CHECK-RV64-NEXT:    j .LBB61_96
 ; CHECK-RV64-NEXT:  .LBB61_618: # %cond.load369
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14044,7 +14044,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_619
 ; CHECK-RV64-NEXT:    j .LBB61_97
 ; CHECK-RV64-NEXT:  .LBB61_619: # %cond.load373
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14059,7 +14059,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_620
 ; CHECK-RV64-NEXT:    j .LBB61_98
 ; CHECK-RV64-NEXT:  .LBB61_620: # %cond.load377
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14074,7 +14074,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_621
 ; CHECK-RV64-NEXT:    j .LBB61_99
 ; CHECK-RV64-NEXT:  .LBB61_621: # %cond.load381
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14089,7 +14089,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_622
 ; CHECK-RV64-NEXT:    j .LBB61_100
 ; CHECK-RV64-NEXT:  .LBB61_622: # %cond.load385
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14104,7 +14104,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_623
 ; CHECK-RV64-NEXT:    j .LBB61_101
 ; CHECK-RV64-NEXT:  .LBB61_623: # %cond.load389
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14119,7 +14119,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_624
 ; CHECK-RV64-NEXT:    j .LBB61_102
 ; CHECK-RV64-NEXT:  .LBB61_624: # %cond.load393
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14134,7 +14134,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_625
 ; CHECK-RV64-NEXT:    j .LBB61_103
 ; CHECK-RV64-NEXT:  .LBB61_625: # %cond.load397
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14149,7 +14149,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_626
 ; CHECK-RV64-NEXT:    j .LBB61_104
 ; CHECK-RV64-NEXT:  .LBB61_626: # %cond.load401
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14164,7 +14164,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_627
 ; CHECK-RV64-NEXT:    j .LBB61_105
 ; CHECK-RV64-NEXT:  .LBB61_627: # %cond.load405
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14179,7 +14179,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_628
 ; CHECK-RV64-NEXT:    j .LBB61_106
 ; CHECK-RV64-NEXT:  .LBB61_628: # %cond.load409
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14194,7 +14194,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_629
 ; CHECK-RV64-NEXT:    j .LBB61_107
 ; CHECK-RV64-NEXT:  .LBB61_629: # %cond.load413
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14209,7 +14209,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_630
 ; CHECK-RV64-NEXT:    j .LBB61_108
 ; CHECK-RV64-NEXT:  .LBB61_630: # %cond.load417
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14224,7 +14224,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_631
 ; CHECK-RV64-NEXT:    j .LBB61_109
 ; CHECK-RV64-NEXT:  .LBB61_631: # %cond.load421
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14239,7 +14239,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_632
 ; CHECK-RV64-NEXT:    j .LBB61_110
 ; CHECK-RV64-NEXT:  .LBB61_632: # %cond.load425
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14254,7 +14254,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_633
 ; CHECK-RV64-NEXT:    j .LBB61_111
 ; CHECK-RV64-NEXT:  .LBB61_633: # %cond.load429
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14269,7 +14269,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_634
 ; CHECK-RV64-NEXT:    j .LBB61_112
 ; CHECK-RV64-NEXT:  .LBB61_634: # %cond.load433
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14284,7 +14284,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_635
 ; CHECK-RV64-NEXT:    j .LBB61_113
 ; CHECK-RV64-NEXT:  .LBB61_635: # %cond.load437
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14299,7 +14299,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_636
 ; CHECK-RV64-NEXT:    j .LBB61_114
 ; CHECK-RV64-NEXT:  .LBB61_636: # %cond.load441
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14314,7 +14314,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_637
 ; CHECK-RV64-NEXT:    j .LBB61_115
 ; CHECK-RV64-NEXT:  .LBB61_637: # %cond.load445
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14329,7 +14329,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_638
 ; CHECK-RV64-NEXT:    j .LBB61_116
 ; CHECK-RV64-NEXT:  .LBB61_638: # %cond.load449
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14344,7 +14344,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_639
 ; CHECK-RV64-NEXT:    j .LBB61_117
 ; CHECK-RV64-NEXT:  .LBB61_639: # %cond.load453
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14359,7 +14359,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_640
 ; CHECK-RV64-NEXT:    j .LBB61_118
 ; CHECK-RV64-NEXT:  .LBB61_640: # %cond.load457
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14374,7 +14374,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_641
 ; CHECK-RV64-NEXT:    j .LBB61_119
 ; CHECK-RV64-NEXT:  .LBB61_641: # %cond.load461
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14389,7 +14389,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_642
 ; CHECK-RV64-NEXT:    j .LBB61_120
 ; CHECK-RV64-NEXT:  .LBB61_642: # %cond.load465
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14404,7 +14404,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_643
 ; CHECK-RV64-NEXT:    j .LBB61_121
 ; CHECK-RV64-NEXT:  .LBB61_643: # %cond.load469
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14419,7 +14419,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_644
 ; CHECK-RV64-NEXT:    j .LBB61_122
 ; CHECK-RV64-NEXT:  .LBB61_644: # %cond.load473
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14434,7 +14434,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_645
 ; CHECK-RV64-NEXT:    j .LBB61_123
 ; CHECK-RV64-NEXT:  .LBB61_645: # %cond.load477
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14449,7 +14449,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_646
 ; CHECK-RV64-NEXT:    j .LBB61_124
 ; CHECK-RV64-NEXT:  .LBB61_646: # %cond.load481
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14464,7 +14464,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_647
 ; CHECK-RV64-NEXT:    j .LBB61_125
 ; CHECK-RV64-NEXT:  .LBB61_647: # %cond.load485
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14479,7 +14479,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_648
 ; CHECK-RV64-NEXT:    j .LBB61_126
 ; CHECK-RV64-NEXT:  .LBB61_648: # %cond.load489
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14494,7 +14494,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_649
 ; CHECK-RV64-NEXT:    j .LBB61_127
 ; CHECK-RV64-NEXT:  .LBB61_649: # %cond.load493
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a2
@@ -14511,7 +14511,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1026: # %cond.load493
 ; CHECK-RV64-NEXT:    j .LBB61_129
 ; CHECK-RV64-NEXT:  .LBB61_650: # %cond.load505
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v10, a1
 ; CHECK-RV64-NEXT:    li a1, 128
@@ -14525,7 +14525,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_651
 ; CHECK-RV64-NEXT:    j .LBB61_133
 ; CHECK-RV64-NEXT:  .LBB61_651: # %cond.load509
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14540,7 +14540,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_652
 ; CHECK-RV64-NEXT:    j .LBB61_134
 ; CHECK-RV64-NEXT:  .LBB61_652: # %cond.load513
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14555,7 +14555,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_653
 ; CHECK-RV64-NEXT:    j .LBB61_135
 ; CHECK-RV64-NEXT:  .LBB61_653: # %cond.load517
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14570,7 +14570,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_654
 ; CHECK-RV64-NEXT:    j .LBB61_136
 ; CHECK-RV64-NEXT:  .LBB61_654: # %cond.load521
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14585,7 +14585,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_655
 ; CHECK-RV64-NEXT:    j .LBB61_137
 ; CHECK-RV64-NEXT:  .LBB61_655: # %cond.load525
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14600,7 +14600,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_656
 ; CHECK-RV64-NEXT:    j .LBB61_138
 ; CHECK-RV64-NEXT:  .LBB61_656: # %cond.load529
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14615,7 +14615,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_657
 ; CHECK-RV64-NEXT:    j .LBB61_139
 ; CHECK-RV64-NEXT:  .LBB61_657: # %cond.load533
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14630,7 +14630,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_658
 ; CHECK-RV64-NEXT:    j .LBB61_140
 ; CHECK-RV64-NEXT:  .LBB61_658: # %cond.load537
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14645,7 +14645,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_659
 ; CHECK-RV64-NEXT:    j .LBB61_141
 ; CHECK-RV64-NEXT:  .LBB61_659: # %cond.load541
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14660,7 +14660,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_660
 ; CHECK-RV64-NEXT:    j .LBB61_142
 ; CHECK-RV64-NEXT:  .LBB61_660: # %cond.load545
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14675,7 +14675,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_661
 ; CHECK-RV64-NEXT:    j .LBB61_143
 ; CHECK-RV64-NEXT:  .LBB61_661: # %cond.load549
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14690,7 +14690,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_662
 ; CHECK-RV64-NEXT:    j .LBB61_144
 ; CHECK-RV64-NEXT:  .LBB61_662: # %cond.load553
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14705,7 +14705,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_663
 ; CHECK-RV64-NEXT:    j .LBB61_145
 ; CHECK-RV64-NEXT:  .LBB61_663: # %cond.load557
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14720,7 +14720,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_664
 ; CHECK-RV64-NEXT:    j .LBB61_146
 ; CHECK-RV64-NEXT:  .LBB61_664: # %cond.load561
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14735,7 +14735,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_665
 ; CHECK-RV64-NEXT:    j .LBB61_147
 ; CHECK-RV64-NEXT:  .LBB61_665: # %cond.load565
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14750,7 +14750,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_666
 ; CHECK-RV64-NEXT:    j .LBB61_148
 ; CHECK-RV64-NEXT:  .LBB61_666: # %cond.load569
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14765,7 +14765,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_667
 ; CHECK-RV64-NEXT:    j .LBB61_149
 ; CHECK-RV64-NEXT:  .LBB61_667: # %cond.load573
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14780,7 +14780,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_668
 ; CHECK-RV64-NEXT:    j .LBB61_150
 ; CHECK-RV64-NEXT:  .LBB61_668: # %cond.load577
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14795,7 +14795,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_669
 ; CHECK-RV64-NEXT:    j .LBB61_151
 ; CHECK-RV64-NEXT:  .LBB61_669: # %cond.load581
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14810,7 +14810,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_670
 ; CHECK-RV64-NEXT:    j .LBB61_152
 ; CHECK-RV64-NEXT:  .LBB61_670: # %cond.load585
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14825,7 +14825,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_671
 ; CHECK-RV64-NEXT:    j .LBB61_153
 ; CHECK-RV64-NEXT:  .LBB61_671: # %cond.load589
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14840,7 +14840,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_672
 ; CHECK-RV64-NEXT:    j .LBB61_154
 ; CHECK-RV64-NEXT:  .LBB61_672: # %cond.load593
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14855,7 +14855,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_673
 ; CHECK-RV64-NEXT:    j .LBB61_155
 ; CHECK-RV64-NEXT:  .LBB61_673: # %cond.load597
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14870,7 +14870,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_674
 ; CHECK-RV64-NEXT:    j .LBB61_156
 ; CHECK-RV64-NEXT:  .LBB61_674: # %cond.load601
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14885,7 +14885,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_675
 ; CHECK-RV64-NEXT:    j .LBB61_157
 ; CHECK-RV64-NEXT:  .LBB61_675: # %cond.load605
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14900,7 +14900,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_676
 ; CHECK-RV64-NEXT:    j .LBB61_158
 ; CHECK-RV64-NEXT:  .LBB61_676: # %cond.load609
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14915,7 +14915,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_677
 ; CHECK-RV64-NEXT:    j .LBB61_159
 ; CHECK-RV64-NEXT:  .LBB61_677: # %cond.load613
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14930,7 +14930,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_678
 ; CHECK-RV64-NEXT:    j .LBB61_160
 ; CHECK-RV64-NEXT:  .LBB61_678: # %cond.load617
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14945,7 +14945,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_679
 ; CHECK-RV64-NEXT:    j .LBB61_161
 ; CHECK-RV64-NEXT:  .LBB61_679: # %cond.load621
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14960,7 +14960,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_680
 ; CHECK-RV64-NEXT:    j .LBB61_162
 ; CHECK-RV64-NEXT:  .LBB61_680: # %cond.load625
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14975,7 +14975,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_681
 ; CHECK-RV64-NEXT:    j .LBB61_163
 ; CHECK-RV64-NEXT:  .LBB61_681: # %cond.load629
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -14990,7 +14990,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_682
 ; CHECK-RV64-NEXT:    j .LBB61_164
 ; CHECK-RV64-NEXT:  .LBB61_682: # %cond.load633
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15005,7 +15005,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_683
 ; CHECK-RV64-NEXT:    j .LBB61_165
 ; CHECK-RV64-NEXT:  .LBB61_683: # %cond.load637
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15020,7 +15020,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_684
 ; CHECK-RV64-NEXT:    j .LBB61_166
 ; CHECK-RV64-NEXT:  .LBB61_684: # %cond.load641
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15035,7 +15035,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_685
 ; CHECK-RV64-NEXT:    j .LBB61_167
 ; CHECK-RV64-NEXT:  .LBB61_685: # %cond.load645
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15050,7 +15050,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_686
 ; CHECK-RV64-NEXT:    j .LBB61_168
 ; CHECK-RV64-NEXT:  .LBB61_686: # %cond.load649
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15065,7 +15065,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_687
 ; CHECK-RV64-NEXT:    j .LBB61_169
 ; CHECK-RV64-NEXT:  .LBB61_687: # %cond.load653
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15080,7 +15080,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_688
 ; CHECK-RV64-NEXT:    j .LBB61_170
 ; CHECK-RV64-NEXT:  .LBB61_688: # %cond.load657
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15095,7 +15095,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_689
 ; CHECK-RV64-NEXT:    j .LBB61_171
 ; CHECK-RV64-NEXT:  .LBB61_689: # %cond.load661
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15110,7 +15110,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_690
 ; CHECK-RV64-NEXT:    j .LBB61_172
 ; CHECK-RV64-NEXT:  .LBB61_690: # %cond.load665
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15125,7 +15125,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_691
 ; CHECK-RV64-NEXT:    j .LBB61_173
 ; CHECK-RV64-NEXT:  .LBB61_691: # %cond.load669
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15140,7 +15140,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_692
 ; CHECK-RV64-NEXT:    j .LBB61_174
 ; CHECK-RV64-NEXT:  .LBB61_692: # %cond.load673
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15155,7 +15155,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_693
 ; CHECK-RV64-NEXT:    j .LBB61_175
 ; CHECK-RV64-NEXT:  .LBB61_693: # %cond.load677
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15170,7 +15170,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_694
 ; CHECK-RV64-NEXT:    j .LBB61_176
 ; CHECK-RV64-NEXT:  .LBB61_694: # %cond.load681
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15185,7 +15185,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_695
 ; CHECK-RV64-NEXT:    j .LBB61_177
 ; CHECK-RV64-NEXT:  .LBB61_695: # %cond.load685
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15200,7 +15200,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_696
 ; CHECK-RV64-NEXT:    j .LBB61_178
 ; CHECK-RV64-NEXT:  .LBB61_696: # %cond.load689
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15215,7 +15215,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_697
 ; CHECK-RV64-NEXT:    j .LBB61_179
 ; CHECK-RV64-NEXT:  .LBB61_697: # %cond.load693
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15230,7 +15230,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_698
 ; CHECK-RV64-NEXT:    j .LBB61_180
 ; CHECK-RV64-NEXT:  .LBB61_698: # %cond.load697
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15245,7 +15245,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_699
 ; CHECK-RV64-NEXT:    j .LBB61_181
 ; CHECK-RV64-NEXT:  .LBB61_699: # %cond.load701
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15260,7 +15260,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_700
 ; CHECK-RV64-NEXT:    j .LBB61_182
 ; CHECK-RV64-NEXT:  .LBB61_700: # %cond.load705
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15275,7 +15275,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_701
 ; CHECK-RV64-NEXT:    j .LBB61_183
 ; CHECK-RV64-NEXT:  .LBB61_701: # %cond.load709
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15290,7 +15290,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_702
 ; CHECK-RV64-NEXT:    j .LBB61_184
 ; CHECK-RV64-NEXT:  .LBB61_702: # %cond.load713
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15305,7 +15305,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_703
 ; CHECK-RV64-NEXT:    j .LBB61_185
 ; CHECK-RV64-NEXT:  .LBB61_703: # %cond.load717
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15320,7 +15320,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_704
 ; CHECK-RV64-NEXT:    j .LBB61_186
 ; CHECK-RV64-NEXT:  .LBB61_704: # %cond.load721
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15335,7 +15335,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_705
 ; CHECK-RV64-NEXT:    j .LBB61_187
 ; CHECK-RV64-NEXT:  .LBB61_705: # %cond.load725
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15350,7 +15350,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_706
 ; CHECK-RV64-NEXT:    j .LBB61_188
 ; CHECK-RV64-NEXT:  .LBB61_706: # %cond.load729
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15365,7 +15365,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_707
 ; CHECK-RV64-NEXT:    j .LBB61_189
 ; CHECK-RV64-NEXT:  .LBB61_707: # %cond.load733
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15380,7 +15380,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_708
 ; CHECK-RV64-NEXT:    j .LBB61_190
 ; CHECK-RV64-NEXT:  .LBB61_708: # %cond.load737
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15395,7 +15395,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_709
 ; CHECK-RV64-NEXT:    j .LBB61_191
 ; CHECK-RV64-NEXT:  .LBB61_709: # %cond.load741
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15410,7 +15410,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_710
 ; CHECK-RV64-NEXT:    j .LBB61_192
 ; CHECK-RV64-NEXT:  .LBB61_710: # %cond.load745
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15425,7 +15425,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_711
 ; CHECK-RV64-NEXT:    j .LBB61_193
 ; CHECK-RV64-NEXT:  .LBB61_711: # %cond.load749
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
@@ -15442,7 +15442,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1027: # %cond.load749
 ; CHECK-RV64-NEXT:    j .LBB61_195
 ; CHECK-RV64-NEXT:  .LBB61_712: # %cond.load761
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
 ; CHECK-RV64-NEXT:    li a2, 192
@@ -15456,7 +15456,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_713
 ; CHECK-RV64-NEXT:    j .LBB61_199
 ; CHECK-RV64-NEXT:  .LBB61_713: # %cond.load765
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15471,7 +15471,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_714
 ; CHECK-RV64-NEXT:    j .LBB61_200
 ; CHECK-RV64-NEXT:  .LBB61_714: # %cond.load769
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15486,7 +15486,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_715
 ; CHECK-RV64-NEXT:    j .LBB61_201
 ; CHECK-RV64-NEXT:  .LBB61_715: # %cond.load773
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15501,7 +15501,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_716
 ; CHECK-RV64-NEXT:    j .LBB61_202
 ; CHECK-RV64-NEXT:  .LBB61_716: # %cond.load777
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15516,7 +15516,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_717
 ; CHECK-RV64-NEXT:    j .LBB61_203
 ; CHECK-RV64-NEXT:  .LBB61_717: # %cond.load781
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15531,7 +15531,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_718
 ; CHECK-RV64-NEXT:    j .LBB61_204
 ; CHECK-RV64-NEXT:  .LBB61_718: # %cond.load785
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15546,7 +15546,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_719
 ; CHECK-RV64-NEXT:    j .LBB61_205
 ; CHECK-RV64-NEXT:  .LBB61_719: # %cond.load789
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15561,7 +15561,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_720
 ; CHECK-RV64-NEXT:    j .LBB61_206
 ; CHECK-RV64-NEXT:  .LBB61_720: # %cond.load793
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15576,7 +15576,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_721
 ; CHECK-RV64-NEXT:    j .LBB61_207
 ; CHECK-RV64-NEXT:  .LBB61_721: # %cond.load797
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15591,7 +15591,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_722
 ; CHECK-RV64-NEXT:    j .LBB61_208
 ; CHECK-RV64-NEXT:  .LBB61_722: # %cond.load801
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15606,7 +15606,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_723
 ; CHECK-RV64-NEXT:    j .LBB61_209
 ; CHECK-RV64-NEXT:  .LBB61_723: # %cond.load805
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15621,7 +15621,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_724
 ; CHECK-RV64-NEXT:    j .LBB61_210
 ; CHECK-RV64-NEXT:  .LBB61_724: # %cond.load809
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15636,7 +15636,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_725
 ; CHECK-RV64-NEXT:    j .LBB61_211
 ; CHECK-RV64-NEXT:  .LBB61_725: # %cond.load813
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15651,7 +15651,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_726
 ; CHECK-RV64-NEXT:    j .LBB61_212
 ; CHECK-RV64-NEXT:  .LBB61_726: # %cond.load817
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15666,7 +15666,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_727
 ; CHECK-RV64-NEXT:    j .LBB61_213
 ; CHECK-RV64-NEXT:  .LBB61_727: # %cond.load821
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15681,7 +15681,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_728
 ; CHECK-RV64-NEXT:    j .LBB61_214
 ; CHECK-RV64-NEXT:  .LBB61_728: # %cond.load825
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15696,7 +15696,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_729
 ; CHECK-RV64-NEXT:    j .LBB61_215
 ; CHECK-RV64-NEXT:  .LBB61_729: # %cond.load829
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15711,7 +15711,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_730
 ; CHECK-RV64-NEXT:    j .LBB61_216
 ; CHECK-RV64-NEXT:  .LBB61_730: # %cond.load833
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15726,7 +15726,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_731
 ; CHECK-RV64-NEXT:    j .LBB61_217
 ; CHECK-RV64-NEXT:  .LBB61_731: # %cond.load837
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15741,7 +15741,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_732
 ; CHECK-RV64-NEXT:    j .LBB61_218
 ; CHECK-RV64-NEXT:  .LBB61_732: # %cond.load841
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15756,7 +15756,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_733
 ; CHECK-RV64-NEXT:    j .LBB61_219
 ; CHECK-RV64-NEXT:  .LBB61_733: # %cond.load845
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15771,7 +15771,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_734
 ; CHECK-RV64-NEXT:    j .LBB61_220
 ; CHECK-RV64-NEXT:  .LBB61_734: # %cond.load849
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15786,7 +15786,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_735
 ; CHECK-RV64-NEXT:    j .LBB61_221
 ; CHECK-RV64-NEXT:  .LBB61_735: # %cond.load853
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15801,7 +15801,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_736
 ; CHECK-RV64-NEXT:    j .LBB61_222
 ; CHECK-RV64-NEXT:  .LBB61_736: # %cond.load857
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15816,7 +15816,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_737
 ; CHECK-RV64-NEXT:    j .LBB61_223
 ; CHECK-RV64-NEXT:  .LBB61_737: # %cond.load861
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15831,7 +15831,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_738
 ; CHECK-RV64-NEXT:    j .LBB61_224
 ; CHECK-RV64-NEXT:  .LBB61_738: # %cond.load865
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15846,7 +15846,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_739
 ; CHECK-RV64-NEXT:    j .LBB61_225
 ; CHECK-RV64-NEXT:  .LBB61_739: # %cond.load869
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15861,7 +15861,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_740
 ; CHECK-RV64-NEXT:    j .LBB61_226
 ; CHECK-RV64-NEXT:  .LBB61_740: # %cond.load873
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15876,7 +15876,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_741
 ; CHECK-RV64-NEXT:    j .LBB61_227
 ; CHECK-RV64-NEXT:  .LBB61_741: # %cond.load877
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15891,7 +15891,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_742
 ; CHECK-RV64-NEXT:    j .LBB61_228
 ; CHECK-RV64-NEXT:  .LBB61_742: # %cond.load881
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15906,7 +15906,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_743
 ; CHECK-RV64-NEXT:    j .LBB61_229
 ; CHECK-RV64-NEXT:  .LBB61_743: # %cond.load885
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15921,7 +15921,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_744
 ; CHECK-RV64-NEXT:    j .LBB61_230
 ; CHECK-RV64-NEXT:  .LBB61_744: # %cond.load889
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15936,7 +15936,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_745
 ; CHECK-RV64-NEXT:    j .LBB61_231
 ; CHECK-RV64-NEXT:  .LBB61_745: # %cond.load893
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15951,7 +15951,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_746
 ; CHECK-RV64-NEXT:    j .LBB61_232
 ; CHECK-RV64-NEXT:  .LBB61_746: # %cond.load897
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15966,7 +15966,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_747
 ; CHECK-RV64-NEXT:    j .LBB61_233
 ; CHECK-RV64-NEXT:  .LBB61_747: # %cond.load901
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15981,7 +15981,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_748
 ; CHECK-RV64-NEXT:    j .LBB61_234
 ; CHECK-RV64-NEXT:  .LBB61_748: # %cond.load905
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -15996,7 +15996,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_749
 ; CHECK-RV64-NEXT:    j .LBB61_235
 ; CHECK-RV64-NEXT:  .LBB61_749: # %cond.load909
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16011,7 +16011,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_750
 ; CHECK-RV64-NEXT:    j .LBB61_236
 ; CHECK-RV64-NEXT:  .LBB61_750: # %cond.load913
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16026,7 +16026,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_751
 ; CHECK-RV64-NEXT:    j .LBB61_237
 ; CHECK-RV64-NEXT:  .LBB61_751: # %cond.load917
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16041,7 +16041,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_752
 ; CHECK-RV64-NEXT:    j .LBB61_238
 ; CHECK-RV64-NEXT:  .LBB61_752: # %cond.load921
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16056,7 +16056,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_753
 ; CHECK-RV64-NEXT:    j .LBB61_239
 ; CHECK-RV64-NEXT:  .LBB61_753: # %cond.load925
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16071,7 +16071,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_754
 ; CHECK-RV64-NEXT:    j .LBB61_240
 ; CHECK-RV64-NEXT:  .LBB61_754: # %cond.load929
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16086,7 +16086,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_755
 ; CHECK-RV64-NEXT:    j .LBB61_241
 ; CHECK-RV64-NEXT:  .LBB61_755: # %cond.load933
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16101,7 +16101,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_756
 ; CHECK-RV64-NEXT:    j .LBB61_242
 ; CHECK-RV64-NEXT:  .LBB61_756: # %cond.load937
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16116,7 +16116,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_757
 ; CHECK-RV64-NEXT:    j .LBB61_243
 ; CHECK-RV64-NEXT:  .LBB61_757: # %cond.load941
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16131,7 +16131,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_758
 ; CHECK-RV64-NEXT:    j .LBB61_244
 ; CHECK-RV64-NEXT:  .LBB61_758: # %cond.load945
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16146,7 +16146,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_759
 ; CHECK-RV64-NEXT:    j .LBB61_245
 ; CHECK-RV64-NEXT:  .LBB61_759: # %cond.load949
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16161,7 +16161,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_760
 ; CHECK-RV64-NEXT:    j .LBB61_246
 ; CHECK-RV64-NEXT:  .LBB61_760: # %cond.load953
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16176,7 +16176,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_761
 ; CHECK-RV64-NEXT:    j .LBB61_247
 ; CHECK-RV64-NEXT:  .LBB61_761: # %cond.load957
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16191,7 +16191,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_762
 ; CHECK-RV64-NEXT:    j .LBB61_248
 ; CHECK-RV64-NEXT:  .LBB61_762: # %cond.load961
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16206,7 +16206,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_763
 ; CHECK-RV64-NEXT:    j .LBB61_249
 ; CHECK-RV64-NEXT:  .LBB61_763: # %cond.load965
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16221,7 +16221,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_764
 ; CHECK-RV64-NEXT:    j .LBB61_250
 ; CHECK-RV64-NEXT:  .LBB61_764: # %cond.load969
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16236,7 +16236,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_765
 ; CHECK-RV64-NEXT:    j .LBB61_251
 ; CHECK-RV64-NEXT:  .LBB61_765: # %cond.load973
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16251,7 +16251,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_766
 ; CHECK-RV64-NEXT:    j .LBB61_252
 ; CHECK-RV64-NEXT:  .LBB61_766: # %cond.load977
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16266,7 +16266,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_767
 ; CHECK-RV64-NEXT:    j .LBB61_253
 ; CHECK-RV64-NEXT:  .LBB61_767: # %cond.load981
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16281,7 +16281,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_768
 ; CHECK-RV64-NEXT:    j .LBB61_254
 ; CHECK-RV64-NEXT:  .LBB61_768: # %cond.load985
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16296,7 +16296,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_769
 ; CHECK-RV64-NEXT:    j .LBB61_255
 ; CHECK-RV64-NEXT:  .LBB61_769: # %cond.load989
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16311,7 +16311,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_770
 ; CHECK-RV64-NEXT:    j .LBB61_256
 ; CHECK-RV64-NEXT:  .LBB61_770: # %cond.load993
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16326,7 +16326,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_771
 ; CHECK-RV64-NEXT:    j .LBB61_257
 ; CHECK-RV64-NEXT:  .LBB61_771: # %cond.load997
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16341,7 +16341,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_772
 ; CHECK-RV64-NEXT:    j .LBB61_258
 ; CHECK-RV64-NEXT:  .LBB61_772: # %cond.load1001
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16356,7 +16356,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_773
 ; CHECK-RV64-NEXT:    j .LBB61_259
 ; CHECK-RV64-NEXT:  .LBB61_773: # %cond.load1005
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a2
@@ -16373,7 +16373,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1028: # %cond.load1005
 ; CHECK-RV64-NEXT:    j .LBB61_261
 ; CHECK-RV64-NEXT:  .LBB61_774: # %cond.load1017
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv8r.v v16, v8
 ; CHECK-RV64-NEXT:    vmv.s.x v12, a1
 ; CHECK-RV64-NEXT:    li a1, 256
@@ -16387,7 +16387,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_775
 ; CHECK-RV64-NEXT:    j .LBB61_265
 ; CHECK-RV64-NEXT:  .LBB61_775: # %cond.load1021
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16400,7 +16400,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_776
 ; CHECK-RV64-NEXT:    j .LBB61_266
 ; CHECK-RV64-NEXT:  .LBB61_776: # %cond.load1025
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16413,7 +16413,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_777
 ; CHECK-RV64-NEXT:    j .LBB61_267
 ; CHECK-RV64-NEXT:  .LBB61_777: # %cond.load1029
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16426,7 +16426,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_778
 ; CHECK-RV64-NEXT:    j .LBB61_268
 ; CHECK-RV64-NEXT:  .LBB61_778: # %cond.load1033
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16439,7 +16439,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_779
 ; CHECK-RV64-NEXT:    j .LBB61_269
 ; CHECK-RV64-NEXT:  .LBB61_779: # %cond.load1037
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16452,7 +16452,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_780
 ; CHECK-RV64-NEXT:    j .LBB61_270
 ; CHECK-RV64-NEXT:  .LBB61_780: # %cond.load1041
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16465,7 +16465,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_781
 ; CHECK-RV64-NEXT:    j .LBB61_271
 ; CHECK-RV64-NEXT:  .LBB61_781: # %cond.load1045
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16478,7 +16478,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_782
 ; CHECK-RV64-NEXT:    j .LBB61_272
 ; CHECK-RV64-NEXT:  .LBB61_782: # %cond.load1049
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16491,7 +16491,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_783
 ; CHECK-RV64-NEXT:    j .LBB61_273
 ; CHECK-RV64-NEXT:  .LBB61_783: # %cond.load1053
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16504,7 +16504,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_784
 ; CHECK-RV64-NEXT:    j .LBB61_274
 ; CHECK-RV64-NEXT:  .LBB61_784: # %cond.load1057
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16517,7 +16517,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_785
 ; CHECK-RV64-NEXT:    j .LBB61_275
 ; CHECK-RV64-NEXT:  .LBB61_785: # %cond.load1061
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16530,7 +16530,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_786
 ; CHECK-RV64-NEXT:    j .LBB61_276
 ; CHECK-RV64-NEXT:  .LBB61_786: # %cond.load1065
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16543,7 +16543,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_787
 ; CHECK-RV64-NEXT:    j .LBB61_277
 ; CHECK-RV64-NEXT:  .LBB61_787: # %cond.load1069
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16556,7 +16556,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_788
 ; CHECK-RV64-NEXT:    j .LBB61_278
 ; CHECK-RV64-NEXT:  .LBB61_788: # %cond.load1073
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16569,7 +16569,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_789
 ; CHECK-RV64-NEXT:    j .LBB61_279
 ; CHECK-RV64-NEXT:  .LBB61_789: # %cond.load1077
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16582,7 +16582,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_790
 ; CHECK-RV64-NEXT:    j .LBB61_280
 ; CHECK-RV64-NEXT:  .LBB61_790: # %cond.load1081
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16595,7 +16595,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_791
 ; CHECK-RV64-NEXT:    j .LBB61_281
 ; CHECK-RV64-NEXT:  .LBB61_791: # %cond.load1085
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16608,7 +16608,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_792
 ; CHECK-RV64-NEXT:    j .LBB61_282
 ; CHECK-RV64-NEXT:  .LBB61_792: # %cond.load1089
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16621,7 +16621,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_793
 ; CHECK-RV64-NEXT:    j .LBB61_283
 ; CHECK-RV64-NEXT:  .LBB61_793: # %cond.load1093
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16634,7 +16634,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_794
 ; CHECK-RV64-NEXT:    j .LBB61_284
 ; CHECK-RV64-NEXT:  .LBB61_794: # %cond.load1097
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16647,7 +16647,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_795
 ; CHECK-RV64-NEXT:    j .LBB61_285
 ; CHECK-RV64-NEXT:  .LBB61_795: # %cond.load1101
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16660,7 +16660,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_796
 ; CHECK-RV64-NEXT:    j .LBB61_286
 ; CHECK-RV64-NEXT:  .LBB61_796: # %cond.load1105
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16673,7 +16673,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_797
 ; CHECK-RV64-NEXT:    j .LBB61_287
 ; CHECK-RV64-NEXT:  .LBB61_797: # %cond.load1109
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16686,7 +16686,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_798
 ; CHECK-RV64-NEXT:    j .LBB61_288
 ; CHECK-RV64-NEXT:  .LBB61_798: # %cond.load1113
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16699,7 +16699,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_799
 ; CHECK-RV64-NEXT:    j .LBB61_289
 ; CHECK-RV64-NEXT:  .LBB61_799: # %cond.load1117
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16712,7 +16712,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_800
 ; CHECK-RV64-NEXT:    j .LBB61_290
 ; CHECK-RV64-NEXT:  .LBB61_800: # %cond.load1121
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16725,7 +16725,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_801
 ; CHECK-RV64-NEXT:    j .LBB61_291
 ; CHECK-RV64-NEXT:  .LBB61_801: # %cond.load1125
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16738,7 +16738,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_802
 ; CHECK-RV64-NEXT:    j .LBB61_292
 ; CHECK-RV64-NEXT:  .LBB61_802: # %cond.load1129
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16751,7 +16751,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_803
 ; CHECK-RV64-NEXT:    j .LBB61_293
 ; CHECK-RV64-NEXT:  .LBB61_803: # %cond.load1133
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16764,7 +16764,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_804
 ; CHECK-RV64-NEXT:    j .LBB61_294
 ; CHECK-RV64-NEXT:  .LBB61_804: # %cond.load1137
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16777,7 +16777,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_805
 ; CHECK-RV64-NEXT:    j .LBB61_295
 ; CHECK-RV64-NEXT:  .LBB61_805: # %cond.load1141
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16790,7 +16790,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_806
 ; CHECK-RV64-NEXT:    j .LBB61_296
 ; CHECK-RV64-NEXT:  .LBB61_806: # %cond.load1145
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16803,7 +16803,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_807
 ; CHECK-RV64-NEXT:    j .LBB61_297
 ; CHECK-RV64-NEXT:  .LBB61_807: # %cond.load1149
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16816,7 +16816,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_808
 ; CHECK-RV64-NEXT:    j .LBB61_298
 ; CHECK-RV64-NEXT:  .LBB61_808: # %cond.load1153
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16829,7 +16829,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_809
 ; CHECK-RV64-NEXT:    j .LBB61_299
 ; CHECK-RV64-NEXT:  .LBB61_809: # %cond.load1157
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16842,7 +16842,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_810
 ; CHECK-RV64-NEXT:    j .LBB61_300
 ; CHECK-RV64-NEXT:  .LBB61_810: # %cond.load1161
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16855,7 +16855,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_811
 ; CHECK-RV64-NEXT:    j .LBB61_301
 ; CHECK-RV64-NEXT:  .LBB61_811: # %cond.load1165
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16868,7 +16868,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_812
 ; CHECK-RV64-NEXT:    j .LBB61_302
 ; CHECK-RV64-NEXT:  .LBB61_812: # %cond.load1169
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16881,7 +16881,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_813
 ; CHECK-RV64-NEXT:    j .LBB61_303
 ; CHECK-RV64-NEXT:  .LBB61_813: # %cond.load1173
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16894,7 +16894,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_814
 ; CHECK-RV64-NEXT:    j .LBB61_304
 ; CHECK-RV64-NEXT:  .LBB61_814: # %cond.load1177
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16907,7 +16907,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_815
 ; CHECK-RV64-NEXT:    j .LBB61_305
 ; CHECK-RV64-NEXT:  .LBB61_815: # %cond.load1181
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16920,7 +16920,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_816
 ; CHECK-RV64-NEXT:    j .LBB61_306
 ; CHECK-RV64-NEXT:  .LBB61_816: # %cond.load1185
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16933,7 +16933,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_817
 ; CHECK-RV64-NEXT:    j .LBB61_307
 ; CHECK-RV64-NEXT:  .LBB61_817: # %cond.load1189
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16946,7 +16946,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_818
 ; CHECK-RV64-NEXT:    j .LBB61_308
 ; CHECK-RV64-NEXT:  .LBB61_818: # %cond.load1193
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16959,7 +16959,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_819
 ; CHECK-RV64-NEXT:    j .LBB61_309
 ; CHECK-RV64-NEXT:  .LBB61_819: # %cond.load1197
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16972,7 +16972,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_820
 ; CHECK-RV64-NEXT:    j .LBB61_310
 ; CHECK-RV64-NEXT:  .LBB61_820: # %cond.load1201
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16985,7 +16985,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_821
 ; CHECK-RV64-NEXT:    j .LBB61_311
 ; CHECK-RV64-NEXT:  .LBB61_821: # %cond.load1205
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -16998,7 +16998,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_822
 ; CHECK-RV64-NEXT:    j .LBB61_312
 ; CHECK-RV64-NEXT:  .LBB61_822: # %cond.load1209
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17011,7 +17011,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_823
 ; CHECK-RV64-NEXT:    j .LBB61_313
 ; CHECK-RV64-NEXT:  .LBB61_823: # %cond.load1213
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17024,7 +17024,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_824
 ; CHECK-RV64-NEXT:    j .LBB61_314
 ; CHECK-RV64-NEXT:  .LBB61_824: # %cond.load1217
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17037,7 +17037,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_825
 ; CHECK-RV64-NEXT:    j .LBB61_315
 ; CHECK-RV64-NEXT:  .LBB61_825: # %cond.load1221
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17050,7 +17050,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_826
 ; CHECK-RV64-NEXT:    j .LBB61_316
 ; CHECK-RV64-NEXT:  .LBB61_826: # %cond.load1225
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17063,7 +17063,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_827
 ; CHECK-RV64-NEXT:    j .LBB61_317
 ; CHECK-RV64-NEXT:  .LBB61_827: # %cond.load1229
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17076,7 +17076,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_828
 ; CHECK-RV64-NEXT:    j .LBB61_318
 ; CHECK-RV64-NEXT:  .LBB61_828: # %cond.load1233
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17089,7 +17089,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_829
 ; CHECK-RV64-NEXT:    j .LBB61_319
 ; CHECK-RV64-NEXT:  .LBB61_829: # %cond.load1237
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17102,7 +17102,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_830
 ; CHECK-RV64-NEXT:    j .LBB61_320
 ; CHECK-RV64-NEXT:  .LBB61_830: # %cond.load1241
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17115,7 +17115,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_831
 ; CHECK-RV64-NEXT:    j .LBB61_321
 ; CHECK-RV64-NEXT:  .LBB61_831: # %cond.load1245
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17128,7 +17128,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_832
 ; CHECK-RV64-NEXT:    j .LBB61_322
 ; CHECK-RV64-NEXT:  .LBB61_832: # %cond.load1249
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17141,7 +17141,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_833
 ; CHECK-RV64-NEXT:    j .LBB61_323
 ; CHECK-RV64-NEXT:  .LBB61_833: # %cond.load1253
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17154,7 +17154,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_834
 ; CHECK-RV64-NEXT:    j .LBB61_324
 ; CHECK-RV64-NEXT:  .LBB61_834: # %cond.load1257
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17167,7 +17167,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_835
 ; CHECK-RV64-NEXT:    j .LBB61_325
 ; CHECK-RV64-NEXT:  .LBB61_835: # %cond.load1261
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -17182,7 +17182,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1029: # %cond.load1261
 ; CHECK-RV64-NEXT:    j .LBB61_327
 ; CHECK-RV64-NEXT:  .LBB61_836: # %cond.load1273
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
 ; CHECK-RV64-NEXT:    li a2, 320
 ; CHECK-RV64-NEXT:    li a3, 319
@@ -17193,7 +17193,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_837
 ; CHECK-RV64-NEXT:    j .LBB61_331
 ; CHECK-RV64-NEXT:  .LBB61_837: # %cond.load1277
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17206,7 +17206,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_838
 ; CHECK-RV64-NEXT:    j .LBB61_332
 ; CHECK-RV64-NEXT:  .LBB61_838: # %cond.load1281
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17219,7 +17219,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_839
 ; CHECK-RV64-NEXT:    j .LBB61_333
 ; CHECK-RV64-NEXT:  .LBB61_839: # %cond.load1285
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17232,7 +17232,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_840
 ; CHECK-RV64-NEXT:    j .LBB61_334
 ; CHECK-RV64-NEXT:  .LBB61_840: # %cond.load1289
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17245,7 +17245,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_841
 ; CHECK-RV64-NEXT:    j .LBB61_335
 ; CHECK-RV64-NEXT:  .LBB61_841: # %cond.load1293
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17258,7 +17258,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_842
 ; CHECK-RV64-NEXT:    j .LBB61_336
 ; CHECK-RV64-NEXT:  .LBB61_842: # %cond.load1297
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17271,7 +17271,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_843
 ; CHECK-RV64-NEXT:    j .LBB61_337
 ; CHECK-RV64-NEXT:  .LBB61_843: # %cond.load1301
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17284,7 +17284,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_844
 ; CHECK-RV64-NEXT:    j .LBB61_338
 ; CHECK-RV64-NEXT:  .LBB61_844: # %cond.load1305
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17297,7 +17297,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_845
 ; CHECK-RV64-NEXT:    j .LBB61_339
 ; CHECK-RV64-NEXT:  .LBB61_845: # %cond.load1309
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17310,7 +17310,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_846
 ; CHECK-RV64-NEXT:    j .LBB61_340
 ; CHECK-RV64-NEXT:  .LBB61_846: # %cond.load1313
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17323,7 +17323,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_847
 ; CHECK-RV64-NEXT:    j .LBB61_341
 ; CHECK-RV64-NEXT:  .LBB61_847: # %cond.load1317
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17336,7 +17336,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_848
 ; CHECK-RV64-NEXT:    j .LBB61_342
 ; CHECK-RV64-NEXT:  .LBB61_848: # %cond.load1321
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17349,7 +17349,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_849
 ; CHECK-RV64-NEXT:    j .LBB61_343
 ; CHECK-RV64-NEXT:  .LBB61_849: # %cond.load1325
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17362,7 +17362,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_850
 ; CHECK-RV64-NEXT:    j .LBB61_344
 ; CHECK-RV64-NEXT:  .LBB61_850: # %cond.load1329
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17375,7 +17375,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_851
 ; CHECK-RV64-NEXT:    j .LBB61_345
 ; CHECK-RV64-NEXT:  .LBB61_851: # %cond.load1333
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17388,7 +17388,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_852
 ; CHECK-RV64-NEXT:    j .LBB61_346
 ; CHECK-RV64-NEXT:  .LBB61_852: # %cond.load1337
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17401,7 +17401,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_853
 ; CHECK-RV64-NEXT:    j .LBB61_347
 ; CHECK-RV64-NEXT:  .LBB61_853: # %cond.load1341
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17414,7 +17414,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_854
 ; CHECK-RV64-NEXT:    j .LBB61_348
 ; CHECK-RV64-NEXT:  .LBB61_854: # %cond.load1345
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17427,7 +17427,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_855
 ; CHECK-RV64-NEXT:    j .LBB61_349
 ; CHECK-RV64-NEXT:  .LBB61_855: # %cond.load1349
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17440,7 +17440,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_856
 ; CHECK-RV64-NEXT:    j .LBB61_350
 ; CHECK-RV64-NEXT:  .LBB61_856: # %cond.load1353
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17453,7 +17453,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_857
 ; CHECK-RV64-NEXT:    j .LBB61_351
 ; CHECK-RV64-NEXT:  .LBB61_857: # %cond.load1357
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17466,7 +17466,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_858
 ; CHECK-RV64-NEXT:    j .LBB61_352
 ; CHECK-RV64-NEXT:  .LBB61_858: # %cond.load1361
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17479,7 +17479,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_859
 ; CHECK-RV64-NEXT:    j .LBB61_353
 ; CHECK-RV64-NEXT:  .LBB61_859: # %cond.load1365
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17492,7 +17492,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_860
 ; CHECK-RV64-NEXT:    j .LBB61_354
 ; CHECK-RV64-NEXT:  .LBB61_860: # %cond.load1369
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17505,7 +17505,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_861
 ; CHECK-RV64-NEXT:    j .LBB61_355
 ; CHECK-RV64-NEXT:  .LBB61_861: # %cond.load1373
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17518,7 +17518,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_862
 ; CHECK-RV64-NEXT:    j .LBB61_356
 ; CHECK-RV64-NEXT:  .LBB61_862: # %cond.load1377
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17531,7 +17531,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_863
 ; CHECK-RV64-NEXT:    j .LBB61_357
 ; CHECK-RV64-NEXT:  .LBB61_863: # %cond.load1381
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17544,7 +17544,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_864
 ; CHECK-RV64-NEXT:    j .LBB61_358
 ; CHECK-RV64-NEXT:  .LBB61_864: # %cond.load1385
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17557,7 +17557,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_865
 ; CHECK-RV64-NEXT:    j .LBB61_359
 ; CHECK-RV64-NEXT:  .LBB61_865: # %cond.load1389
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17570,7 +17570,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_866
 ; CHECK-RV64-NEXT:    j .LBB61_360
 ; CHECK-RV64-NEXT:  .LBB61_866: # %cond.load1393
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17583,7 +17583,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_867
 ; CHECK-RV64-NEXT:    j .LBB61_361
 ; CHECK-RV64-NEXT:  .LBB61_867: # %cond.load1397
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17596,7 +17596,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_868
 ; CHECK-RV64-NEXT:    j .LBB61_362
 ; CHECK-RV64-NEXT:  .LBB61_868: # %cond.load1401
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17609,7 +17609,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_869
 ; CHECK-RV64-NEXT:    j .LBB61_363
 ; CHECK-RV64-NEXT:  .LBB61_869: # %cond.load1405
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17622,7 +17622,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_870
 ; CHECK-RV64-NEXT:    j .LBB61_364
 ; CHECK-RV64-NEXT:  .LBB61_870: # %cond.load1409
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17635,7 +17635,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_871
 ; CHECK-RV64-NEXT:    j .LBB61_365
 ; CHECK-RV64-NEXT:  .LBB61_871: # %cond.load1413
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17648,7 +17648,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_872
 ; CHECK-RV64-NEXT:    j .LBB61_366
 ; CHECK-RV64-NEXT:  .LBB61_872: # %cond.load1417
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17661,7 +17661,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_873
 ; CHECK-RV64-NEXT:    j .LBB61_367
 ; CHECK-RV64-NEXT:  .LBB61_873: # %cond.load1421
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17674,7 +17674,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_874
 ; CHECK-RV64-NEXT:    j .LBB61_368
 ; CHECK-RV64-NEXT:  .LBB61_874: # %cond.load1425
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17687,7 +17687,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_875
 ; CHECK-RV64-NEXT:    j .LBB61_369
 ; CHECK-RV64-NEXT:  .LBB61_875: # %cond.load1429
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17700,7 +17700,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_876
 ; CHECK-RV64-NEXT:    j .LBB61_370
 ; CHECK-RV64-NEXT:  .LBB61_876: # %cond.load1433
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17713,7 +17713,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_877
 ; CHECK-RV64-NEXT:    j .LBB61_371
 ; CHECK-RV64-NEXT:  .LBB61_877: # %cond.load1437
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17726,7 +17726,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_878
 ; CHECK-RV64-NEXT:    j .LBB61_372
 ; CHECK-RV64-NEXT:  .LBB61_878: # %cond.load1441
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17739,7 +17739,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_879
 ; CHECK-RV64-NEXT:    j .LBB61_373
 ; CHECK-RV64-NEXT:  .LBB61_879: # %cond.load1445
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17752,7 +17752,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_880
 ; CHECK-RV64-NEXT:    j .LBB61_374
 ; CHECK-RV64-NEXT:  .LBB61_880: # %cond.load1449
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17765,7 +17765,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_881
 ; CHECK-RV64-NEXT:    j .LBB61_375
 ; CHECK-RV64-NEXT:  .LBB61_881: # %cond.load1453
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17778,7 +17778,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_882
 ; CHECK-RV64-NEXT:    j .LBB61_376
 ; CHECK-RV64-NEXT:  .LBB61_882: # %cond.load1457
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17791,7 +17791,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_883
 ; CHECK-RV64-NEXT:    j .LBB61_377
 ; CHECK-RV64-NEXT:  .LBB61_883: # %cond.load1461
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17804,7 +17804,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_884
 ; CHECK-RV64-NEXT:    j .LBB61_378
 ; CHECK-RV64-NEXT:  .LBB61_884: # %cond.load1465
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17817,7 +17817,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_885
 ; CHECK-RV64-NEXT:    j .LBB61_379
 ; CHECK-RV64-NEXT:  .LBB61_885: # %cond.load1469
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17830,7 +17830,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_886
 ; CHECK-RV64-NEXT:    j .LBB61_380
 ; CHECK-RV64-NEXT:  .LBB61_886: # %cond.load1473
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17843,7 +17843,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_887
 ; CHECK-RV64-NEXT:    j .LBB61_381
 ; CHECK-RV64-NEXT:  .LBB61_887: # %cond.load1477
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17856,7 +17856,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_888
 ; CHECK-RV64-NEXT:    j .LBB61_382
 ; CHECK-RV64-NEXT:  .LBB61_888: # %cond.load1481
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17869,7 +17869,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_889
 ; CHECK-RV64-NEXT:    j .LBB61_383
 ; CHECK-RV64-NEXT:  .LBB61_889: # %cond.load1485
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17882,7 +17882,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_890
 ; CHECK-RV64-NEXT:    j .LBB61_384
 ; CHECK-RV64-NEXT:  .LBB61_890: # %cond.load1489
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17895,7 +17895,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_891
 ; CHECK-RV64-NEXT:    j .LBB61_385
 ; CHECK-RV64-NEXT:  .LBB61_891: # %cond.load1493
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17908,7 +17908,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_892
 ; CHECK-RV64-NEXT:    j .LBB61_386
 ; CHECK-RV64-NEXT:  .LBB61_892: # %cond.load1497
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17921,7 +17921,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_893
 ; CHECK-RV64-NEXT:    j .LBB61_387
 ; CHECK-RV64-NEXT:  .LBB61_893: # %cond.load1501
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17934,7 +17934,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_894
 ; CHECK-RV64-NEXT:    j .LBB61_388
 ; CHECK-RV64-NEXT:  .LBB61_894: # %cond.load1505
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17947,7 +17947,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_895
 ; CHECK-RV64-NEXT:    j .LBB61_389
 ; CHECK-RV64-NEXT:  .LBB61_895: # %cond.load1509
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17960,7 +17960,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_896
 ; CHECK-RV64-NEXT:    j .LBB61_390
 ; CHECK-RV64-NEXT:  .LBB61_896: # %cond.load1513
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17973,7 +17973,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_897
 ; CHECK-RV64-NEXT:    j .LBB61_391
 ; CHECK-RV64-NEXT:  .LBB61_897: # %cond.load1517
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -17988,7 +17988,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1030: # %cond.load1517
 ; CHECK-RV64-NEXT:    j .LBB61_393
 ; CHECK-RV64-NEXT:  .LBB61_898: # %cond.load1529
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
 ; CHECK-RV64-NEXT:    li a1, 384
 ; CHECK-RV64-NEXT:    li a3, 383
@@ -17999,7 +17999,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_899
 ; CHECK-RV64-NEXT:    j .LBB61_397
 ; CHECK-RV64-NEXT:  .LBB61_899: # %cond.load1533
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18012,7 +18012,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_900
 ; CHECK-RV64-NEXT:    j .LBB61_398
 ; CHECK-RV64-NEXT:  .LBB61_900: # %cond.load1537
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18025,7 +18025,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_901
 ; CHECK-RV64-NEXT:    j .LBB61_399
 ; CHECK-RV64-NEXT:  .LBB61_901: # %cond.load1541
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18038,7 +18038,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_902
 ; CHECK-RV64-NEXT:    j .LBB61_400
 ; CHECK-RV64-NEXT:  .LBB61_902: # %cond.load1545
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18051,7 +18051,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_903
 ; CHECK-RV64-NEXT:    j .LBB61_401
 ; CHECK-RV64-NEXT:  .LBB61_903: # %cond.load1549
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18064,7 +18064,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_904
 ; CHECK-RV64-NEXT:    j .LBB61_402
 ; CHECK-RV64-NEXT:  .LBB61_904: # %cond.load1553
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18077,7 +18077,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_905
 ; CHECK-RV64-NEXT:    j .LBB61_403
 ; CHECK-RV64-NEXT:  .LBB61_905: # %cond.load1557
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18090,7 +18090,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_906
 ; CHECK-RV64-NEXT:    j .LBB61_404
 ; CHECK-RV64-NEXT:  .LBB61_906: # %cond.load1561
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18103,7 +18103,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_907
 ; CHECK-RV64-NEXT:    j .LBB61_405
 ; CHECK-RV64-NEXT:  .LBB61_907: # %cond.load1565
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18116,7 +18116,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_908
 ; CHECK-RV64-NEXT:    j .LBB61_406
 ; CHECK-RV64-NEXT:  .LBB61_908: # %cond.load1569
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18129,7 +18129,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a1, .LBB61_909
 ; CHECK-RV64-NEXT:    j .LBB61_407
 ; CHECK-RV64-NEXT:  .LBB61_909: # %cond.load1573
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18142,7 +18142,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_910
 ; CHECK-RV64-NEXT:    j .LBB61_408
 ; CHECK-RV64-NEXT:  .LBB61_910: # %cond.load1577
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18155,7 +18155,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_911
 ; CHECK-RV64-NEXT:    j .LBB61_409
 ; CHECK-RV64-NEXT:  .LBB61_911: # %cond.load1581
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18168,7 +18168,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_912
 ; CHECK-RV64-NEXT:    j .LBB61_410
 ; CHECK-RV64-NEXT:  .LBB61_912: # %cond.load1585
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18181,7 +18181,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_913
 ; CHECK-RV64-NEXT:    j .LBB61_411
 ; CHECK-RV64-NEXT:  .LBB61_913: # %cond.load1589
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18194,7 +18194,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_914
 ; CHECK-RV64-NEXT:    j .LBB61_412
 ; CHECK-RV64-NEXT:  .LBB61_914: # %cond.load1593
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18207,7 +18207,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_915
 ; CHECK-RV64-NEXT:    j .LBB61_413
 ; CHECK-RV64-NEXT:  .LBB61_915: # %cond.load1597
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18220,7 +18220,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_916
 ; CHECK-RV64-NEXT:    j .LBB61_414
 ; CHECK-RV64-NEXT:  .LBB61_916: # %cond.load1601
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18233,7 +18233,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_917
 ; CHECK-RV64-NEXT:    j .LBB61_415
 ; CHECK-RV64-NEXT:  .LBB61_917: # %cond.load1605
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18246,7 +18246,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_918
 ; CHECK-RV64-NEXT:    j .LBB61_416
 ; CHECK-RV64-NEXT:  .LBB61_918: # %cond.load1609
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18259,7 +18259,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_919
 ; CHECK-RV64-NEXT:    j .LBB61_417
 ; CHECK-RV64-NEXT:  .LBB61_919: # %cond.load1613
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18272,7 +18272,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_920
 ; CHECK-RV64-NEXT:    j .LBB61_418
 ; CHECK-RV64-NEXT:  .LBB61_920: # %cond.load1617
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18285,7 +18285,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_921
 ; CHECK-RV64-NEXT:    j .LBB61_419
 ; CHECK-RV64-NEXT:  .LBB61_921: # %cond.load1621
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18298,7 +18298,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_922
 ; CHECK-RV64-NEXT:    j .LBB61_420
 ; CHECK-RV64-NEXT:  .LBB61_922: # %cond.load1625
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18311,7 +18311,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_923
 ; CHECK-RV64-NEXT:    j .LBB61_421
 ; CHECK-RV64-NEXT:  .LBB61_923: # %cond.load1629
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18324,7 +18324,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_924
 ; CHECK-RV64-NEXT:    j .LBB61_422
 ; CHECK-RV64-NEXT:  .LBB61_924: # %cond.load1633
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18337,7 +18337,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_925
 ; CHECK-RV64-NEXT:    j .LBB61_423
 ; CHECK-RV64-NEXT:  .LBB61_925: # %cond.load1637
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18350,7 +18350,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_926
 ; CHECK-RV64-NEXT:    j .LBB61_424
 ; CHECK-RV64-NEXT:  .LBB61_926: # %cond.load1641
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18363,7 +18363,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_927
 ; CHECK-RV64-NEXT:    j .LBB61_425
 ; CHECK-RV64-NEXT:  .LBB61_927: # %cond.load1645
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18376,7 +18376,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_928
 ; CHECK-RV64-NEXT:    j .LBB61_426
 ; CHECK-RV64-NEXT:  .LBB61_928: # %cond.load1649
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18389,7 +18389,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_929
 ; CHECK-RV64-NEXT:    j .LBB61_427
 ; CHECK-RV64-NEXT:  .LBB61_929: # %cond.load1653
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18402,7 +18402,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_930
 ; CHECK-RV64-NEXT:    j .LBB61_428
 ; CHECK-RV64-NEXT:  .LBB61_930: # %cond.load1657
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18415,7 +18415,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_931
 ; CHECK-RV64-NEXT:    j .LBB61_429
 ; CHECK-RV64-NEXT:  .LBB61_931: # %cond.load1661
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18428,7 +18428,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_932
 ; CHECK-RV64-NEXT:    j .LBB61_430
 ; CHECK-RV64-NEXT:  .LBB61_932: # %cond.load1665
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18441,7 +18441,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_933
 ; CHECK-RV64-NEXT:    j .LBB61_431
 ; CHECK-RV64-NEXT:  .LBB61_933: # %cond.load1669
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18454,7 +18454,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_934
 ; CHECK-RV64-NEXT:    j .LBB61_432
 ; CHECK-RV64-NEXT:  .LBB61_934: # %cond.load1673
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18467,7 +18467,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_935
 ; CHECK-RV64-NEXT:    j .LBB61_433
 ; CHECK-RV64-NEXT:  .LBB61_935: # %cond.load1677
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18480,7 +18480,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_936
 ; CHECK-RV64-NEXT:    j .LBB61_434
 ; CHECK-RV64-NEXT:  .LBB61_936: # %cond.load1681
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18493,7 +18493,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_937
 ; CHECK-RV64-NEXT:    j .LBB61_435
 ; CHECK-RV64-NEXT:  .LBB61_937: # %cond.load1685
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18506,7 +18506,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_938
 ; CHECK-RV64-NEXT:    j .LBB61_436
 ; CHECK-RV64-NEXT:  .LBB61_938: # %cond.load1689
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18519,7 +18519,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_939
 ; CHECK-RV64-NEXT:    j .LBB61_437
 ; CHECK-RV64-NEXT:  .LBB61_939: # %cond.load1693
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18532,7 +18532,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_940
 ; CHECK-RV64-NEXT:    j .LBB61_438
 ; CHECK-RV64-NEXT:  .LBB61_940: # %cond.load1697
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18545,7 +18545,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_941
 ; CHECK-RV64-NEXT:    j .LBB61_439
 ; CHECK-RV64-NEXT:  .LBB61_941: # %cond.load1701
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18558,7 +18558,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_942
 ; CHECK-RV64-NEXT:    j .LBB61_440
 ; CHECK-RV64-NEXT:  .LBB61_942: # %cond.load1705
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18571,7 +18571,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_943
 ; CHECK-RV64-NEXT:    j .LBB61_441
 ; CHECK-RV64-NEXT:  .LBB61_943: # %cond.load1709
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18584,7 +18584,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_944
 ; CHECK-RV64-NEXT:    j .LBB61_442
 ; CHECK-RV64-NEXT:  .LBB61_944: # %cond.load1713
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18597,7 +18597,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_945
 ; CHECK-RV64-NEXT:    j .LBB61_443
 ; CHECK-RV64-NEXT:  .LBB61_945: # %cond.load1717
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18610,7 +18610,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_946
 ; CHECK-RV64-NEXT:    j .LBB61_444
 ; CHECK-RV64-NEXT:  .LBB61_946: # %cond.load1721
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18623,7 +18623,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_947
 ; CHECK-RV64-NEXT:    j .LBB61_445
 ; CHECK-RV64-NEXT:  .LBB61_947: # %cond.load1725
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18636,7 +18636,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_948
 ; CHECK-RV64-NEXT:    j .LBB61_446
 ; CHECK-RV64-NEXT:  .LBB61_948: # %cond.load1729
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18649,7 +18649,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_949
 ; CHECK-RV64-NEXT:    j .LBB61_447
 ; CHECK-RV64-NEXT:  .LBB61_949: # %cond.load1733
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18662,7 +18662,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_950
 ; CHECK-RV64-NEXT:    j .LBB61_448
 ; CHECK-RV64-NEXT:  .LBB61_950: # %cond.load1737
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18675,7 +18675,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_951
 ; CHECK-RV64-NEXT:    j .LBB61_449
 ; CHECK-RV64-NEXT:  .LBB61_951: # %cond.load1741
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18688,7 +18688,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_952
 ; CHECK-RV64-NEXT:    j .LBB61_450
 ; CHECK-RV64-NEXT:  .LBB61_952: # %cond.load1745
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18701,7 +18701,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_953
 ; CHECK-RV64-NEXT:    j .LBB61_451
 ; CHECK-RV64-NEXT:  .LBB61_953: # %cond.load1749
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18714,7 +18714,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_954
 ; CHECK-RV64-NEXT:    j .LBB61_452
 ; CHECK-RV64-NEXT:  .LBB61_954: # %cond.load1753
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18727,7 +18727,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_955
 ; CHECK-RV64-NEXT:    j .LBB61_453
 ; CHECK-RV64-NEXT:  .LBB61_955: # %cond.load1757
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18740,7 +18740,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_956
 ; CHECK-RV64-NEXT:    j .LBB61_454
 ; CHECK-RV64-NEXT:  .LBB61_956: # %cond.load1761
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18753,7 +18753,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_957
 ; CHECK-RV64-NEXT:    j .LBB61_455
 ; CHECK-RV64-NEXT:  .LBB61_957: # %cond.load1765
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18766,7 +18766,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_958
 ; CHECK-RV64-NEXT:    j .LBB61_456
 ; CHECK-RV64-NEXT:  .LBB61_958: # %cond.load1769
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18779,7 +18779,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_959
 ; CHECK-RV64-NEXT:    j .LBB61_457
 ; CHECK-RV64-NEXT:  .LBB61_959: # %cond.load1773
-; CHECK-RV64-NEXT:    lbu a1, 0(a0)
+; CHECK-RV64-NEXT:    lb a1, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a1
@@ -18794,7 +18794,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:  .LBB61_1031: # %cond.load1773
 ; CHECK-RV64-NEXT:    j .LBB61_459
 ; CHECK-RV64-NEXT:  .LBB61_960: # %cond.load1785
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
 ; CHECK-RV64-NEXT:    li a2, 448
 ; CHECK-RV64-NEXT:    li a3, 447
@@ -18805,7 +18805,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_961
 ; CHECK-RV64-NEXT:    j .LBB61_463
 ; CHECK-RV64-NEXT:  .LBB61_961: # %cond.load1789
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18818,7 +18818,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_962
 ; CHECK-RV64-NEXT:    j .LBB61_464
 ; CHECK-RV64-NEXT:  .LBB61_962: # %cond.load1793
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18831,7 +18831,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_963
 ; CHECK-RV64-NEXT:    j .LBB61_465
 ; CHECK-RV64-NEXT:  .LBB61_963: # %cond.load1797
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18844,7 +18844,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_964
 ; CHECK-RV64-NEXT:    j .LBB61_466
 ; CHECK-RV64-NEXT:  .LBB61_964: # %cond.load1801
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18857,7 +18857,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_965
 ; CHECK-RV64-NEXT:    j .LBB61_467
 ; CHECK-RV64-NEXT:  .LBB61_965: # %cond.load1805
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18870,7 +18870,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_966
 ; CHECK-RV64-NEXT:    j .LBB61_468
 ; CHECK-RV64-NEXT:  .LBB61_966: # %cond.load1809
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18883,7 +18883,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_967
 ; CHECK-RV64-NEXT:    j .LBB61_469
 ; CHECK-RV64-NEXT:  .LBB61_967: # %cond.load1813
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18896,7 +18896,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_968
 ; CHECK-RV64-NEXT:    j .LBB61_470
 ; CHECK-RV64-NEXT:  .LBB61_968: # %cond.load1817
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18909,7 +18909,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_969
 ; CHECK-RV64-NEXT:    j .LBB61_471
 ; CHECK-RV64-NEXT:  .LBB61_969: # %cond.load1821
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18922,7 +18922,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_970
 ; CHECK-RV64-NEXT:    j .LBB61_472
 ; CHECK-RV64-NEXT:  .LBB61_970: # %cond.load1825
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18935,7 +18935,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bnez a2, .LBB61_971
 ; CHECK-RV64-NEXT:    j .LBB61_473
 ; CHECK-RV64-NEXT:  .LBB61_971: # %cond.load1829
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18948,7 +18948,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_972
 ; CHECK-RV64-NEXT:    j .LBB61_474
 ; CHECK-RV64-NEXT:  .LBB61_972: # %cond.load1833
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18961,7 +18961,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_973
 ; CHECK-RV64-NEXT:    j .LBB61_475
 ; CHECK-RV64-NEXT:  .LBB61_973: # %cond.load1837
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18974,7 +18974,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_974
 ; CHECK-RV64-NEXT:    j .LBB61_476
 ; CHECK-RV64-NEXT:  .LBB61_974: # %cond.load1841
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -18987,7 +18987,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_975
 ; CHECK-RV64-NEXT:    j .LBB61_477
 ; CHECK-RV64-NEXT:  .LBB61_975: # %cond.load1845
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19000,7 +19000,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_976
 ; CHECK-RV64-NEXT:    j .LBB61_478
 ; CHECK-RV64-NEXT:  .LBB61_976: # %cond.load1849
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19013,7 +19013,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_977
 ; CHECK-RV64-NEXT:    j .LBB61_479
 ; CHECK-RV64-NEXT:  .LBB61_977: # %cond.load1853
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19026,7 +19026,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_978
 ; CHECK-RV64-NEXT:    j .LBB61_480
 ; CHECK-RV64-NEXT:  .LBB61_978: # %cond.load1857
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19039,7 +19039,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_979
 ; CHECK-RV64-NEXT:    j .LBB61_481
 ; CHECK-RV64-NEXT:  .LBB61_979: # %cond.load1861
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19052,7 +19052,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_980
 ; CHECK-RV64-NEXT:    j .LBB61_482
 ; CHECK-RV64-NEXT:  .LBB61_980: # %cond.load1865
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19065,7 +19065,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_981
 ; CHECK-RV64-NEXT:    j .LBB61_483
 ; CHECK-RV64-NEXT:  .LBB61_981: # %cond.load1869
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19078,7 +19078,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_982
 ; CHECK-RV64-NEXT:    j .LBB61_484
 ; CHECK-RV64-NEXT:  .LBB61_982: # %cond.load1873
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19091,7 +19091,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_983
 ; CHECK-RV64-NEXT:    j .LBB61_485
 ; CHECK-RV64-NEXT:  .LBB61_983: # %cond.load1877
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19104,7 +19104,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_984
 ; CHECK-RV64-NEXT:    j .LBB61_486
 ; CHECK-RV64-NEXT:  .LBB61_984: # %cond.load1881
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19117,7 +19117,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_985
 ; CHECK-RV64-NEXT:    j .LBB61_487
 ; CHECK-RV64-NEXT:  .LBB61_985: # %cond.load1885
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19130,7 +19130,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_986
 ; CHECK-RV64-NEXT:    j .LBB61_488
 ; CHECK-RV64-NEXT:  .LBB61_986: # %cond.load1889
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19143,7 +19143,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_987
 ; CHECK-RV64-NEXT:    j .LBB61_489
 ; CHECK-RV64-NEXT:  .LBB61_987: # %cond.load1893
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19156,7 +19156,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_988
 ; CHECK-RV64-NEXT:    j .LBB61_490
 ; CHECK-RV64-NEXT:  .LBB61_988: # %cond.load1897
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19169,7 +19169,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_989
 ; CHECK-RV64-NEXT:    j .LBB61_491
 ; CHECK-RV64-NEXT:  .LBB61_989: # %cond.load1901
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19182,7 +19182,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_990
 ; CHECK-RV64-NEXT:    j .LBB61_492
 ; CHECK-RV64-NEXT:  .LBB61_990: # %cond.load1905
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19195,7 +19195,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_991
 ; CHECK-RV64-NEXT:    j .LBB61_493
 ; CHECK-RV64-NEXT:  .LBB61_991: # %cond.load1909
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19208,7 +19208,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_992
 ; CHECK-RV64-NEXT:    j .LBB61_494
 ; CHECK-RV64-NEXT:  .LBB61_992: # %cond.load1913
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19221,7 +19221,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_993
 ; CHECK-RV64-NEXT:    j .LBB61_495
 ; CHECK-RV64-NEXT:  .LBB61_993: # %cond.load1917
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19234,7 +19234,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_994
 ; CHECK-RV64-NEXT:    j .LBB61_496
 ; CHECK-RV64-NEXT:  .LBB61_994: # %cond.load1921
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19247,7 +19247,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_995
 ; CHECK-RV64-NEXT:    j .LBB61_497
 ; CHECK-RV64-NEXT:  .LBB61_995: # %cond.load1925
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19260,7 +19260,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_996
 ; CHECK-RV64-NEXT:    j .LBB61_498
 ; CHECK-RV64-NEXT:  .LBB61_996: # %cond.load1929
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19273,7 +19273,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_997
 ; CHECK-RV64-NEXT:    j .LBB61_499
 ; CHECK-RV64-NEXT:  .LBB61_997: # %cond.load1933
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19286,7 +19286,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_998
 ; CHECK-RV64-NEXT:    j .LBB61_500
 ; CHECK-RV64-NEXT:  .LBB61_998: # %cond.load1937
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19299,7 +19299,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_999
 ; CHECK-RV64-NEXT:    j .LBB61_501
 ; CHECK-RV64-NEXT:  .LBB61_999: # %cond.load1941
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19312,7 +19312,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1000
 ; CHECK-RV64-NEXT:    j .LBB61_502
 ; CHECK-RV64-NEXT:  .LBB61_1000: # %cond.load1945
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19325,7 +19325,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1001
 ; CHECK-RV64-NEXT:    j .LBB61_503
 ; CHECK-RV64-NEXT:  .LBB61_1001: # %cond.load1949
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19338,7 +19338,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1002
 ; CHECK-RV64-NEXT:    j .LBB61_504
 ; CHECK-RV64-NEXT:  .LBB61_1002: # %cond.load1953
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19351,7 +19351,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1003
 ; CHECK-RV64-NEXT:    j .LBB61_505
 ; CHECK-RV64-NEXT:  .LBB61_1003: # %cond.load1957
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19364,7 +19364,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1004
 ; CHECK-RV64-NEXT:    j .LBB61_506
 ; CHECK-RV64-NEXT:  .LBB61_1004: # %cond.load1961
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19377,7 +19377,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1005
 ; CHECK-RV64-NEXT:    j .LBB61_507
 ; CHECK-RV64-NEXT:  .LBB61_1005: # %cond.load1965
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19390,7 +19390,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1006
 ; CHECK-RV64-NEXT:    j .LBB61_508
 ; CHECK-RV64-NEXT:  .LBB61_1006: # %cond.load1969
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19403,7 +19403,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1007
 ; CHECK-RV64-NEXT:    j .LBB61_509
 ; CHECK-RV64-NEXT:  .LBB61_1007: # %cond.load1973
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19416,7 +19416,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1008
 ; CHECK-RV64-NEXT:    j .LBB61_510
 ; CHECK-RV64-NEXT:  .LBB61_1008: # %cond.load1977
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19429,7 +19429,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1009
 ; CHECK-RV64-NEXT:    j .LBB61_511
 ; CHECK-RV64-NEXT:  .LBB61_1009: # %cond.load1981
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19442,7 +19442,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1010
 ; CHECK-RV64-NEXT:    j .LBB61_512
 ; CHECK-RV64-NEXT:  .LBB61_1010: # %cond.load1985
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19455,7 +19455,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1011
 ; CHECK-RV64-NEXT:    j .LBB61_513
 ; CHECK-RV64-NEXT:  .LBB61_1011: # %cond.load1989
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19468,7 +19468,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1012
 ; CHECK-RV64-NEXT:    j .LBB61_514
 ; CHECK-RV64-NEXT:  .LBB61_1012: # %cond.load1993
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19481,7 +19481,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1013
 ; CHECK-RV64-NEXT:    j .LBB61_515
 ; CHECK-RV64-NEXT:  .LBB61_1013: # %cond.load1997
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19494,7 +19494,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1014
 ; CHECK-RV64-NEXT:    j .LBB61_516
 ; CHECK-RV64-NEXT:  .LBB61_1014: # %cond.load2001
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19507,7 +19507,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1015
 ; CHECK-RV64-NEXT:    j .LBB61_517
 ; CHECK-RV64-NEXT:  .LBB61_1015: # %cond.load2005
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19520,7 +19520,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1016
 ; CHECK-RV64-NEXT:    j .LBB61_518
 ; CHECK-RV64-NEXT:  .LBB61_1016: # %cond.load2009
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19533,7 +19533,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1017
 ; CHECK-RV64-NEXT:    j .LBB61_519
 ; CHECK-RV64-NEXT:  .LBB61_1017: # %cond.load2013
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19546,7 +19546,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1018
 ; CHECK-RV64-NEXT:    j .LBB61_520
 ; CHECK-RV64-NEXT:  .LBB61_1018: # %cond.load2017
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19559,7 +19559,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1019
 ; CHECK-RV64-NEXT:    j .LBB61_521
 ; CHECK-RV64-NEXT:  .LBB61_1019: # %cond.load2021
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19572,7 +19572,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1020
 ; CHECK-RV64-NEXT:    j .LBB61_522
 ; CHECK-RV64-NEXT:  .LBB61_1020: # %cond.load2025
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19585,7 +19585,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1021
 ; CHECK-RV64-NEXT:    j .LBB61_523
 ; CHECK-RV64-NEXT:  .LBB61_1021: # %cond.load2029
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19598,7 +19598,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1022
 ; CHECK-RV64-NEXT:    j .LBB61_524
 ; CHECK-RV64-NEXT:  .LBB61_1022: # %cond.load2033
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19611,7 +19611,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a2, .LBB61_1023
 ; CHECK-RV64-NEXT:    j .LBB61_525
 ; CHECK-RV64-NEXT:  .LBB61_1023: # %cond.load2037
-; CHECK-RV64-NEXT:    lbu a2, 0(a0)
+; CHECK-RV64-NEXT:    lb a2, 0(a0)
 ; CHECK-RV64-NEXT:    li a3, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a2
@@ -19623,7 +19623,7 @@ define <512 x i8> @test_expandload_v512i8_vlen512(ptr %base, <512 x i1> %mask, <
 ; CHECK-RV64-NEXT:    bltz a1, .LBB61_1024
 ; CHECK-RV64-NEXT:    j .LBB61_526
 ; CHECK-RV64-NEXT:  .LBB61_1024: # %cond.load2041
-; CHECK-RV64-NEXT:    lbu a0, 0(a0)
+; CHECK-RV64-NEXT:    lb a0, 0(a0)
 ; CHECK-RV64-NEXT:    li a1, 512
 ; CHECK-RV64-NEXT:    vsetvli zero, a1, e8, m8, ta, ma
 ; CHECK-RV64-NEXT:    vmv.s.x v16, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
index 7a337aa253805..5f164be531278 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vector-i8-index-cornercase.ll
@@ -29,7 +29,7 @@ define <512 x i8> @single_source(<512 x i8> %a) {
 ; CHECK-NEXT:    lbu a0, 770(sp)
 ; CHECK-NEXT:    li a1, 431
 ; CHECK-NEXT:    vslide1down.vx v8, v8, a0
-; CHECK-NEXT:    lbu a0, 1012(sp)
+; CHECK-NEXT:    lb a0, 1012(sp)
 ; CHECK-NEXT:    vsetvli zero, a3, e8, m8, tu, ma
 ; CHECK-NEXT:    vslideup.vx v8, v24, a1
 ; CHECK-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
@@ -127,11 +127,11 @@ define <512 x i8> @two_source(<512 x i8> %a, <512 x i8> %b) {
 ; CHECK-NEXT:    li a3, 465
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m8, ta, ma
 ; CHECK-NEXT:    vse8.v v24, (a1)
-; CHECK-NEXT:    lbu a1, 985(sp)
+; CHECK-NEXT:    lb a1, 985(sp)
 ; CHECK-NEXT:    vsetvli zero, a2, e8, m8, tu, ma
 ; CHECK-NEXT:    vslideup.vx v8, v0, a3
 ; CHECK-NEXT:    li a2, 478
-; CHECK-NEXT:    lbu a3, 1012(sp)
+; CHECK-NEXT:    lb a3, 1012(sp)
 ; CHECK-NEXT:    vmv.s.x v24, a1
 ; CHECK-NEXT:    li a1, 477
 ; CHECK-NEXT:    vsetvli zero, a2, e8, m8, tu, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract-subvector.ll
index e2711a0231509..ce2f133424a08 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract-subvector.ll
@@ -318,11 +318,17 @@ define void @extract_v8i32_nxv16i32_8(<vscale x 16 x i32> %x, ptr %y) {
 }
 
 define void @extract_v8i1_v64i1_0(ptr %x, ptr %y) {
-; CHECK-LABEL: extract_v8i1_v64i1_0:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 0(a0)
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
+; CHECK32-LABEL: extract_v8i1_v64i1_0:
+; CHECK32:       # %bb.0:
+; CHECK32-NEXT:    lbu a0, 0(a0)
+; CHECK32-NEXT:    sb a0, 0(a1)
+; CHECK32-NEXT:    ret
+;
+; CHECK64-LABEL: extract_v8i1_v64i1_0:
+; CHECK64:       # %bb.0:
+; CHECK64-NEXT:    lb a0, 0(a0)
+; CHECK64-NEXT:    sb a0, 0(a1)
+; CHECK64-NEXT:    ret
   %a = load <64 x i1>, ptr %x
   %c = call <8 x i1> @llvm.vector.extract.v8i1.v64i1(<64 x i1> %a, i64 0)
   store <8 x i1> %c, ptr %y
@@ -330,11 +336,17 @@ define void @extract_v8i1_v64i1_0(ptr %x, ptr %y) {
 }
 
 define void @extract_v8i1_v64i1_8(ptr %x, ptr %y) {
-; CHECK-LABEL: extract_v8i1_v64i1_8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 1(a0)
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
+; CHECK32-LABEL: extract_v8i1_v64i1_8:
+; CHECK32:       # %bb.0:
+; CHECK32-NEXT:    lbu a0, 1(a0)
+; CHECK32-NEXT:    sb a0, 0(a1)
+; CHECK32-NEXT:    ret
+;
+; CHECK64-LABEL: extract_v8i1_v64i1_8:
+; CHECK64:       # %bb.0:
+; CHECK64-NEXT:    lb a0, 1(a0)
+; CHECK64-NEXT:    sb a0, 0(a1)
+; CHECK64-NEXT:    ret
   %a = load <64 x i1>, ptr %x
   %c = call <8 x i1> @llvm.vector.extract.v8i1.v64i1(<64 x i1> %a, i64 8)
   store <8 x i1> %c, ptr %y
@@ -342,11 +354,17 @@ define void @extract_v8i1_v64i1_8(ptr %x, ptr %y) {
 }
 
 define void @extract_v8i1_v64i1_48(ptr %x, ptr %y) {
-; CHECK-LABEL: extract_v8i1_v64i1_48:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 6(a0)
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
+; CHECK32-LABEL: extract_v8i1_v64i1_48:
+; CHECK32:       # %bb.0:
+; CHECK32-NEXT:    lbu a0, 6(a0)
+; CHECK32-NEXT:    sb a0, 0(a1)
+; CHECK32-NEXT:    ret
+;
+; CHECK64-LABEL: extract_v8i1_v64i1_48:
+; CHECK64:       # %bb.0:
+; CHECK64-NEXT:    lb a0, 6(a0)
+; CHECK64-NEXT:    sb a0, 0(a1)
+; CHECK64-NEXT:    ret
   %a = load <64 x i1>, ptr %x
   %c = call <8 x i1> @llvm.vector.extract.v8i1.v64i1(<64 x i1> %a, i64 48)
   store <8 x i1> %c, ptr %y
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
index 1fa96d3c07ca9..c74a478ad52ce 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
@@ -1474,7 +1474,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RV64V-ONLY-NEXT:    .cfi_def_cfa_offset 16
 ; RV64V-ONLY-NEXT:    sd s0, 8(sp) # 8-byte Folded Spill
 ; RV64V-ONLY-NEXT:    .cfi_offset s0, -8
-; RV64V-ONLY-NEXT:    lbu a1, 0(a0)
+; RV64V-ONLY-NEXT:    lb a1, 0(a0)
 ; RV64V-ONLY-NEXT:    lbu a2, 1(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 2(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 3(a0)
@@ -1482,7 +1482,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RV64V-ONLY-NEXT:    lbu a6, 5(a0)
 ; RV64V-ONLY-NEXT:    lbu a7, 6(a0)
 ; RV64V-ONLY-NEXT:    lbu t0, 7(a0)
-; RV64V-ONLY-NEXT:    lbu t1, 8(a0)
+; RV64V-ONLY-NEXT:    lb t1, 8(a0)
 ; RV64V-ONLY-NEXT:    lbu t2, 9(a0)
 ; RV64V-ONLY-NEXT:    lbu t3, 10(a0)
 ; RV64V-ONLY-NEXT:    lbu t4, 11(a0)
@@ -1526,7 +1526,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RVA22U64-NEXT:    lbu a5, 4(a0)
 ; RVA22U64-NEXT:    lbu a1, 5(a0)
 ; RVA22U64-NEXT:    lbu a7, 6(a0)
-; RVA22U64-NEXT:    lbu t0, 7(a0)
+; RVA22U64-NEXT:    lb t0, 7(a0)
 ; RVA22U64-NEXT:    slli a2, a2, 8
 ; RVA22U64-NEXT:    slli a3, a3, 16
 ; RVA22U64-NEXT:    slli a4, a4, 24
@@ -1550,7 +1550,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RVA22U64-NEXT:    lbu a2, 12(a0)
 ; RVA22U64-NEXT:    lbu a5, 13(a0)
 ; RVA22U64-NEXT:    lbu a3, 14(a0)
-; RVA22U64-NEXT:    lbu a0, 15(a0)
+; RVA22U64-NEXT:    lb a0, 15(a0)
 ; RVA22U64-NEXT:    slli a2, a2, 32
 ; RVA22U64-NEXT:    slli a5, a5, 40
 ; RVA22U64-NEXT:    or a2, a2, a5
@@ -1570,26 +1570,26 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ;
 ; RVA22U64-PACK-LABEL: buildvec_v16i8_loads_contigous:
 ; RVA22U64-PACK:       # %bb.0:
-; RVA22U64-PACK-NEXT:    lbu a1, 0(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 1(a0)
-; RVA22U64-PACK-NEXT:    lbu a6, 2(a0)
-; RVA22U64-PACK-NEXT:    lbu a7, 3(a0)
-; RVA22U64-PACK-NEXT:    lbu t0, 4(a0)
-; RVA22U64-PACK-NEXT:    lbu a3, 5(a0)
-; RVA22U64-PACK-NEXT:    lbu a4, 6(a0)
-; RVA22U64-PACK-NEXT:    lbu a5, 7(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 0(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 1(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 2(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 3(a0)
+; RVA22U64-PACK-NEXT:    lb t0, 4(a0)
+; RVA22U64-PACK-NEXT:    lb a3, 5(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 6(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 7(a0)
 ; RVA22U64-PACK-NEXT:    packh t1, a1, a2
-; RVA22U64-PACK-NEXT:    lbu t2, 8(a0)
-; RVA22U64-PACK-NEXT:    lbu t3, 9(a0)
-; RVA22U64-PACK-NEXT:    lbu t4, 10(a0)
-; RVA22U64-PACK-NEXT:    lbu a1, 11(a0)
+; RVA22U64-PACK-NEXT:    lb t2, 8(a0)
+; RVA22U64-PACK-NEXT:    lb t3, 9(a0)
+; RVA22U64-PACK-NEXT:    lb t4, 10(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 11(a0)
 ; RVA22U64-PACK-NEXT:    packh a6, a6, a7
 ; RVA22U64-PACK-NEXT:    packh a7, t0, a3
 ; RVA22U64-PACK-NEXT:    packh t0, a4, a5
-; RVA22U64-PACK-NEXT:    lbu a5, 12(a0)
-; RVA22U64-PACK-NEXT:    lbu a3, 13(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 14(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 15(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 12(a0)
+; RVA22U64-PACK-NEXT:    lb a3, 13(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 14(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 15(a0)
 ; RVA22U64-PACK-NEXT:    packh a4, t2, t3
 ; RVA22U64-PACK-NEXT:    packh a1, t4, a1
 ; RVA22U64-PACK-NEXT:    packh a3, a5, a3
@@ -1611,7 +1611,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RV64ZVE32-NEXT:    .cfi_def_cfa_offset 16
 ; RV64ZVE32-NEXT:    sd s0, 8(sp) # 8-byte Folded Spill
 ; RV64ZVE32-NEXT:    .cfi_offset s0, -8
-; RV64ZVE32-NEXT:    lbu a1, 0(a0)
+; RV64ZVE32-NEXT:    lb a1, 0(a0)
 ; RV64ZVE32-NEXT:    lbu a2, 1(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 2(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 3(a0)
@@ -1619,7 +1619,7 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
 ; RV64ZVE32-NEXT:    lbu a6, 5(a0)
 ; RV64ZVE32-NEXT:    lbu a7, 6(a0)
 ; RV64ZVE32-NEXT:    lbu t0, 7(a0)
-; RV64ZVE32-NEXT:    lbu t1, 8(a0)
+; RV64ZVE32-NEXT:    lb t1, 8(a0)
 ; RV64ZVE32-NEXT:    lbu t2, 9(a0)
 ; RV64ZVE32-NEXT:    lbu t3, 10(a0)
 ; RV64ZVE32-NEXT:    lbu t4, 11(a0)
@@ -1848,7 +1848,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RV64V-ONLY-NEXT:    .cfi_def_cfa_offset 16
 ; RV64V-ONLY-NEXT:    sd s0, 8(sp) # 8-byte Folded Spill
 ; RV64V-ONLY-NEXT:    .cfi_offset s0, -8
-; RV64V-ONLY-NEXT:    lbu a1, 0(a0)
+; RV64V-ONLY-NEXT:    lb a1, 0(a0)
 ; RV64V-ONLY-NEXT:    lbu a2, 1(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 22(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 31(a0)
@@ -1856,7 +1856,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RV64V-ONLY-NEXT:    lbu a6, 44(a0)
 ; RV64V-ONLY-NEXT:    lbu a7, 55(a0)
 ; RV64V-ONLY-NEXT:    lbu t0, 75(a0)
-; RV64V-ONLY-NEXT:    lbu t1, 82(a0)
+; RV64V-ONLY-NEXT:    lb t1, 82(a0)
 ; RV64V-ONLY-NEXT:    lbu t2, 154(a0)
 ; RV64V-ONLY-NEXT:    lbu t3, 161(a0)
 ; RV64V-ONLY-NEXT:    lbu t4, 163(a0)
@@ -1900,7 +1900,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RVA22U64-NEXT:    lbu a6, 623(a0)
 ; RVA22U64-NEXT:    lbu a5, 44(a0)
 ; RVA22U64-NEXT:    lbu a1, 55(a0)
-; RVA22U64-NEXT:    lbu t0, 75(a0)
+; RVA22U64-NEXT:    lb t0, 75(a0)
 ; RVA22U64-NEXT:    lbu t1, 82(a0)
 ; RVA22U64-NEXT:    slli a2, a2, 8
 ; RVA22U64-NEXT:    slli a3, a3, 16
@@ -1920,7 +1920,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RVA22U64-NEXT:    or a3, t0, a6
 ; RVA22U64-NEXT:    or a4, t1, a4
 ; RVA22U64-NEXT:    lbu a5, 161(a0)
-; RVA22U64-NEXT:    lbu a1, 154(a0)
+; RVA22U64-NEXT:    lb a1, 154(a0)
 ; RVA22U64-NEXT:    lbu a0, 163(a0)
 ; RVA22U64-NEXT:    slli t4, t4, 16
 ; RVA22U64-NEXT:    slli a5, a5, 24
@@ -1948,26 +1948,26 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RVA22U64-PACK-NEXT:    .cfi_def_cfa_offset 16
 ; RVA22U64-PACK-NEXT:    sd s0, 8(sp) # 8-byte Folded Spill
 ; RVA22U64-PACK-NEXT:    .cfi_offset s0, -8
-; RVA22U64-PACK-NEXT:    lbu a1, 0(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 1(a0)
-; RVA22U64-PACK-NEXT:    lbu a6, 22(a0)
-; RVA22U64-PACK-NEXT:    lbu a7, 31(a0)
-; RVA22U64-PACK-NEXT:    lbu t0, 623(a0)
-; RVA22U64-PACK-NEXT:    lbu t3, 44(a0)
-; RVA22U64-PACK-NEXT:    lbu t4, 55(a0)
-; RVA22U64-PACK-NEXT:    lbu t5, 75(a0)
-; RVA22U64-PACK-NEXT:    lbu t1, 82(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 0(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 1(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 22(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 31(a0)
+; RVA22U64-PACK-NEXT:    lb t0, 623(a0)
+; RVA22U64-PACK-NEXT:    lb t3, 44(a0)
+; RVA22U64-PACK-NEXT:    lb t4, 55(a0)
+; RVA22U64-PACK-NEXT:    lb t5, 75(a0)
+; RVA22U64-PACK-NEXT:    lb t1, 82(a0)
 ; RVA22U64-PACK-NEXT:    packh t2, a1, a2
-; RVA22U64-PACK-NEXT:    lbu t6, 154(a0)
-; RVA22U64-PACK-NEXT:    lbu s0, 161(a0)
-; RVA22U64-PACK-NEXT:    lbu a3, 163(a0)
+; RVA22U64-PACK-NEXT:    lb t6, 154(a0)
+; RVA22U64-PACK-NEXT:    lb s0, 161(a0)
+; RVA22U64-PACK-NEXT:    lb a3, 163(a0)
 ; RVA22U64-PACK-NEXT:    packh a6, a6, a7
 ; RVA22U64-PACK-NEXT:    packh a7, t3, t4
 ; RVA22U64-PACK-NEXT:    packh a2, t0, t5
-; RVA22U64-PACK-NEXT:    lbu a4, 93(a0)
-; RVA22U64-PACK-NEXT:    lbu a5, 105(a0)
-; RVA22U64-PACK-NEXT:    lbu a1, 124(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 144(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 93(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 105(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 124(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 144(a0)
 ; RVA22U64-PACK-NEXT:    packh a4, t1, a4
 ; RVA22U64-PACK-NEXT:    packh a5, a5, s0
 ; RVA22U64-PACK-NEXT:    packh a1, a1, a3
@@ -1993,7 +1993,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RV64ZVE32-NEXT:    .cfi_def_cfa_offset 16
 ; RV64ZVE32-NEXT:    sd s0, 8(sp) # 8-byte Folded Spill
 ; RV64ZVE32-NEXT:    .cfi_offset s0, -8
-; RV64ZVE32-NEXT:    lbu a1, 0(a0)
+; RV64ZVE32-NEXT:    lb a1, 0(a0)
 ; RV64ZVE32-NEXT:    lbu a2, 1(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 22(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 31(a0)
@@ -2001,7 +2001,7 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
 ; RV64ZVE32-NEXT:    lbu a6, 44(a0)
 ; RV64ZVE32-NEXT:    lbu a7, 55(a0)
 ; RV64ZVE32-NEXT:    lbu t0, 75(a0)
-; RV64ZVE32-NEXT:    lbu t1, 82(a0)
+; RV64ZVE32-NEXT:    lb t1, 82(a0)
 ; RV64ZVE32-NEXT:    lbu t2, 154(a0)
 ; RV64ZVE32-NEXT:    lbu t3, 161(a0)
 ; RV64ZVE32-NEXT:    lbu t4, 163(a0)
@@ -2165,7 +2165,7 @@ define <16 x i8> @buildvec_v16i8_undef_low_half(ptr %p) {
 ;
 ; RV64V-ONLY-LABEL: buildvec_v16i8_undef_low_half:
 ; RV64V-ONLY:       # %bb.0:
-; RV64V-ONLY-NEXT:    lbu a1, 82(a0)
+; RV64V-ONLY-NEXT:    lb a1, 82(a0)
 ; RV64V-ONLY-NEXT:    lbu a2, 93(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 105(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 124(a0)
@@ -2194,7 +2194,7 @@ define <16 x i8> @buildvec_v16i8_undef_low_half(ptr %p) {
 ; RVA22U64-NEXT:    or a6, a2, a1
 ; RVA22U64-NEXT:    lbu a2, 161(a0)
 ; RVA22U64-NEXT:    lbu a5, 144(a0)
-; RVA22U64-NEXT:    lbu a1, 154(a0)
+; RVA22U64-NEXT:    lb a1, 154(a0)
 ; RVA22U64-NEXT:    lbu a0, 163(a0)
 ; RVA22U64-NEXT:    slli a3, a3, 16
 ; RVA22U64-NEXT:    slli a2, a2, 24
@@ -2215,14 +2215,14 @@ define <16 x i8> @buildvec_v16i8_undef_low_half(ptr %p) {
 ;
 ; RVA22U64-PACK-LABEL: buildvec_v16i8_undef_low_half:
 ; RVA22U64-PACK:       # %bb.0:
-; RVA22U64-PACK-NEXT:    lbu a6, 82(a0)
-; RVA22U64-PACK-NEXT:    lbu a7, 93(a0)
-; RVA22U64-PACK-NEXT:    lbu t0, 105(a0)
-; RVA22U64-PACK-NEXT:    lbu a4, 124(a0)
-; RVA22U64-PACK-NEXT:    lbu a5, 161(a0)
-; RVA22U64-PACK-NEXT:    lbu a1, 163(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 144(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 154(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 82(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 93(a0)
+; RVA22U64-PACK-NEXT:    lb t0, 105(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 124(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 161(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 163(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 144(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 154(a0)
 ; RVA22U64-PACK-NEXT:    packh a3, a6, a7
 ; RVA22U64-PACK-NEXT:    packh a5, t0, a5
 ; RVA22U64-PACK-NEXT:    packh a1, a4, a1
@@ -2240,7 +2240,7 @@ define <16 x i8> @buildvec_v16i8_undef_low_half(ptr %p) {
 ;
 ; RV64ZVE32-LABEL: buildvec_v16i8_undef_low_half:
 ; RV64ZVE32:       # %bb.0:
-; RV64ZVE32-NEXT:    lbu a1, 82(a0)
+; RV64ZVE32-NEXT:    lb a1, 82(a0)
 ; RV64ZVE32-NEXT:    lbu a2, 93(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 105(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 124(a0)
@@ -2366,7 +2366,7 @@ define <16 x i8> @buildvec_v16i8_undef_high_half(ptr %p) {
 ;
 ; RV64V-ONLY-LABEL: buildvec_v16i8_undef_high_half:
 ; RV64V-ONLY:       # %bb.0:
-; RV64V-ONLY-NEXT:    lbu a1, 0(a0)
+; RV64V-ONLY-NEXT:    lb a1, 0(a0)
 ; RV64V-ONLY-NEXT:    lbu a2, 1(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 22(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 31(a0)
@@ -2400,7 +2400,7 @@ define <16 x i8> @buildvec_v16i8_undef_high_half(ptr %p) {
 ; RVA22U64-NEXT:    slli a4, a4, 24
 ; RVA22U64-NEXT:    or a3, a3, a4
 ; RVA22U64-NEXT:    lbu a4, 623(a0)
-; RVA22U64-NEXT:    lbu a0, 75(a0)
+; RVA22U64-NEXT:    lb a0, 75(a0)
 ; RVA22U64-NEXT:    slli a2, a2, 32
 ; RVA22U64-NEXT:    slli a5, a5, 40
 ; RVA22U64-NEXT:    or a2, a2, a5
@@ -2417,14 +2417,14 @@ define <16 x i8> @buildvec_v16i8_undef_high_half(ptr %p) {
 ;
 ; RVA22U64-PACK-LABEL: buildvec_v16i8_undef_high_half:
 ; RVA22U64-PACK:       # %bb.0:
-; RVA22U64-PACK-NEXT:    lbu a6, 0(a0)
-; RVA22U64-PACK-NEXT:    lbu a7, 1(a0)
-; RVA22U64-PACK-NEXT:    lbu t0, 22(a0)
-; RVA22U64-PACK-NEXT:    lbu a4, 31(a0)
-; RVA22U64-PACK-NEXT:    lbu a5, 623(a0)
-; RVA22U64-PACK-NEXT:    lbu a1, 44(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 55(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 75(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 0(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 1(a0)
+; RVA22U64-PACK-NEXT:    lb t0, 22(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 31(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 623(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 44(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 55(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 75(a0)
 ; RVA22U64-PACK-NEXT:    packh a3, a6, a7
 ; RVA22U64-PACK-NEXT:    packh a4, t0, a4
 ; RVA22U64-PACK-NEXT:    packh a1, a1, a2
@@ -2442,7 +2442,7 @@ define <16 x i8> @buildvec_v16i8_undef_high_half(ptr %p) {
 ;
 ; RV64ZVE32-LABEL: buildvec_v16i8_undef_high_half:
 ; RV64ZVE32:       # %bb.0:
-; RV64ZVE32-NEXT:    lbu a1, 0(a0)
+; RV64ZVE32-NEXT:    lb a1, 0(a0)
 ; RV64ZVE32-NEXT:    lbu a2, 1(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 22(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 31(a0)
@@ -2580,14 +2580,14 @@ define <16 x i8> @buildvec_v16i8_undef_edges(ptr %p) {
 ; RV64V-ONLY-LABEL: buildvec_v16i8_undef_edges:
 ; RV64V-ONLY:       # %bb.0:
 ; RV64V-ONLY-NEXT:    lbu a1, 623(a0)
-; RV64V-ONLY-NEXT:    lbu a2, 31(a0)
+; RV64V-ONLY-NEXT:    lb a2, 31(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 44(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 55(a0)
 ; RV64V-ONLY-NEXT:    lbu a5, 75(a0)
 ; RV64V-ONLY-NEXT:    li a6, 255
 ; RV64V-ONLY-NEXT:    vsetivli zero, 1, e16, m1, ta, ma
 ; RV64V-ONLY-NEXT:    vmv.s.x v0, a6
-; RV64V-ONLY-NEXT:    lbu a6, 82(a0)
+; RV64V-ONLY-NEXT:    lb a6, 82(a0)
 ; RV64V-ONLY-NEXT:    lbu a7, 93(a0)
 ; RV64V-ONLY-NEXT:    lbu t0, 105(a0)
 ; RV64V-ONLY-NEXT:    lbu a0, 161(a0)
@@ -2610,8 +2610,8 @@ define <16 x i8> @buildvec_v16i8_undef_edges(ptr %p) {
 ; RVA22U64-NEXT:    lbu a1, 623(a0)
 ; RVA22U64-NEXT:    lbu a2, 44(a0)
 ; RVA22U64-NEXT:    lbu a3, 55(a0)
-; RVA22U64-NEXT:    lbu a6, 31(a0)
-; RVA22U64-NEXT:    lbu a5, 75(a0)
+; RVA22U64-NEXT:    lb a6, 31(a0)
+; RVA22U64-NEXT:    lb a5, 75(a0)
 ; RVA22U64-NEXT:    slli a2, a2, 32
 ; RVA22U64-NEXT:    slli a3, a3, 40
 ; RVA22U64-NEXT:    slli a1, a1, 48
@@ -2638,15 +2638,15 @@ define <16 x i8> @buildvec_v16i8_undef_edges(ptr %p) {
 ;
 ; RVA22U64-PACK-LABEL: buildvec_v16i8_undef_edges:
 ; RVA22U64-PACK:       # %bb.0:
-; RVA22U64-PACK-NEXT:    lbu a7, 623(a0)
-; RVA22U64-PACK-NEXT:    lbu a6, 31(a0)
-; RVA22U64-PACK-NEXT:    lbu t0, 44(a0)
-; RVA22U64-PACK-NEXT:    lbu a4, 55(a0)
-; RVA22U64-PACK-NEXT:    lbu a5, 75(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 82(a0)
-; RVA22U64-PACK-NEXT:    lbu a1, 93(a0)
-; RVA22U64-PACK-NEXT:    lbu a3, 105(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 161(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 623(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 31(a0)
+; RVA22U64-PACK-NEXT:    lb t0, 44(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 55(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 75(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 82(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 93(a0)
+; RVA22U64-PACK-NEXT:    lb a3, 105(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 161(a0)
 ; RVA22U64-PACK-NEXT:    packh a4, t0, a4
 ; RVA22U64-PACK-NEXT:    packh a5, a7, a5
 ; RVA22U64-PACK-NEXT:    packh a1, a2, a1
@@ -2667,14 +2667,14 @@ define <16 x i8> @buildvec_v16i8_undef_edges(ptr %p) {
 ; RV64ZVE32-LABEL: buildvec_v16i8_undef_edges:
 ; RV64ZVE32:       # %bb.0:
 ; RV64ZVE32-NEXT:    lbu a1, 623(a0)
-; RV64ZVE32-NEXT:    lbu a2, 31(a0)
+; RV64ZVE32-NEXT:    lb a2, 31(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 44(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 55(a0)
 ; RV64ZVE32-NEXT:    lbu a5, 75(a0)
 ; RV64ZVE32-NEXT:    li a6, 255
 ; RV64ZVE32-NEXT:    vsetivli zero, 1, e16, m1, ta, ma
 ; RV64ZVE32-NEXT:    vmv.s.x v0, a6
-; RV64ZVE32-NEXT:    lbu a6, 82(a0)
+; RV64ZVE32-NEXT:    lb a6, 82(a0)
 ; RV64ZVE32-NEXT:    lbu a7, 93(a0)
 ; RV64ZVE32-NEXT:    lbu t0, 105(a0)
 ; RV64ZVE32-NEXT:    lbu a0, 161(a0)
@@ -2820,12 +2820,12 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
 ;
 ; RV64V-ONLY-LABEL: buildvec_v16i8_loads_undef_scattered:
 ; RV64V-ONLY:       # %bb.0:
-; RV64V-ONLY-NEXT:    lbu a1, 0(a0)
+; RV64V-ONLY-NEXT:    lb a1, 0(a0)
 ; RV64V-ONLY-NEXT:    lbu a2, 1(a0)
 ; RV64V-ONLY-NEXT:    lbu a3, 44(a0)
 ; RV64V-ONLY-NEXT:    lbu a4, 55(a0)
 ; RV64V-ONLY-NEXT:    lbu a5, 75(a0)
-; RV64V-ONLY-NEXT:    lbu a6, 82(a0)
+; RV64V-ONLY-NEXT:    lb a6, 82(a0)
 ; RV64V-ONLY-NEXT:    lbu a7, 93(a0)
 ; RV64V-ONLY-NEXT:    lbu t0, 124(a0)
 ; RV64V-ONLY-NEXT:    li t1, 255
@@ -2859,7 +2859,7 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
 ; RVA22U64-NEXT:    lbu a4, 55(a0)
 ; RVA22U64-NEXT:    slli a1, a1, 8
 ; RVA22U64-NEXT:    or a6, a2, a1
-; RVA22U64-NEXT:    lbu a7, 75(a0)
+; RVA22U64-NEXT:    lb a7, 75(a0)
 ; RVA22U64-NEXT:    lbu a5, 82(a0)
 ; RVA22U64-NEXT:    lbu a1, 93(a0)
 ; RVA22U64-NEXT:    lbu a2, 124(a0)
@@ -2867,7 +2867,7 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
 ; RVA22U64-NEXT:    slli a4, a4, 40
 ; RVA22U64-NEXT:    or a3, a3, a4
 ; RVA22U64-NEXT:    lbu a4, 144(a0)
-; RVA22U64-NEXT:    lbu a0, 154(a0)
+; RVA22U64-NEXT:    lb a0, 154(a0)
 ; RVA22U64-NEXT:    slli a1, a1, 8
 ; RVA22U64-NEXT:    or a1, a1, a5
 ; RVA22U64-NEXT:    slli a4, a4, 48
@@ -2886,18 +2886,18 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
 ;
 ; RVA22U64-PACK-LABEL: buildvec_v16i8_loads_undef_scattered:
 ; RVA22U64-PACK:       # %bb.0:
-; RVA22U64-PACK-NEXT:    lbu a1, 0(a0)
-; RVA22U64-PACK-NEXT:    lbu a2, 1(a0)
-; RVA22U64-PACK-NEXT:    lbu a6, 44(a0)
-; RVA22U64-PACK-NEXT:    lbu a7, 55(a0)
-; RVA22U64-PACK-NEXT:    lbu t1, 82(a0)
-; RVA22U64-PACK-NEXT:    lbu a3, 93(a0)
+; RVA22U64-PACK-NEXT:    lb a1, 0(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 1(a0)
+; RVA22U64-PACK-NEXT:    lb a6, 44(a0)
+; RVA22U64-PACK-NEXT:    lb a7, 55(a0)
+; RVA22U64-PACK-NEXT:    lb t1, 82(a0)
+; RVA22U64-PACK-NEXT:    lb a3, 93(a0)
 ; RVA22U64-PACK-NEXT:    packh t0, a1, a2
-; RVA22U64-PACK-NEXT:    lbu a2, 144(a0)
-; RVA22U64-PACK-NEXT:    lbu a4, 154(a0)
+; RVA22U64-PACK-NEXT:    lb a2, 144(a0)
+; RVA22U64-PACK-NEXT:    lb a4, 154(a0)
 ; RVA22U64-PACK-NEXT:    packh a1, a6, a7
-; RVA22U64-PACK-NEXT:    lbu a5, 75(a0)
-; RVA22U64-PACK-NEXT:    lbu a0, 124(a0)
+; RVA22U64-PACK-NEXT:    lb a5, 75(a0)
+; RVA22U64-PACK-NEXT:    lb a0, 124(a0)
 ; RVA22U64-PACK-NEXT:    packh a3, t1, a3
 ; RVA22U64-PACK-NEXT:    packh a2, a2, a4
 ; RVA22U64-PACK-NEXT:    packh a4, a0, a5
@@ -2916,12 +2916,12 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
 ;
 ; RV64ZVE32-LABEL: buildvec_v16i8_loads_undef_scattered:
 ; RV64ZVE32:       # %bb.0:
-; RV64ZVE32-NEXT:    lbu a1, 0(a0)
+; RV64ZVE32-NEXT:    lb a1, 0(a0)
 ; RV64ZVE32-NEXT:    lbu a2, 1(a0)
 ; RV64ZVE32-NEXT:    lbu a3, 44(a0)
 ; RV64ZVE32-NEXT:    lbu a4, 55(a0)
 ; RV64ZVE32-NEXT:    lbu a5, 75(a0)
-; RV64ZVE32-NEXT:    lbu a6, 82(a0)
+; RV64ZVE32-NEXT:    lb a6, 82(a0)
 ; RV64ZVE32-NEXT:    lbu a7, 93(a0)
 ; RV64ZVE32-NEXT:    lbu t0, 124(a0)
 ; RV64ZVE32-NEXT:    li t1, 255
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-vrgather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-vrgather.ll
index 32c1f2ca32fab..a1df7e84c06ac 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-vrgather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-vrgather.ll
@@ -3,13 +3,21 @@
 ; RUN: llc -mtriple=riscv64 -mattr=+v -verify-machineinstrs < %s | FileCheck --check-prefixes=CHECK,RV64 %s
 
 define void @gather_const_v16i8(ptr %x) {
-; CHECK-LABEL: gather_const_v16i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a1, 12(a0)
-; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
-; CHECK-NEXT:    vmv.v.x v8, a1
-; CHECK-NEXT:    vse8.v v8, (a0)
-; CHECK-NEXT:    ret
+; RV32-LABEL: gather_const_v16i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lbu a1, 12(a0)
+; RV32-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
+; RV32-NEXT:    vmv.v.x v8, a1
+; RV32-NEXT:    vse8.v v8, (a0)
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: gather_const_v16i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lb a1, 12(a0)
+; RV64-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
+; RV64-NEXT:    vmv.v.x v8, a1
+; RV64-NEXT:    vse8.v v8, (a0)
+; RV64-NEXT:    ret
   %a = load <16 x i8>, ptr %x
   %b = extractelement <16 x i8> %a, i32 12
   %c = insertelement <16 x i8> poison, i8 %b, i32 0
@@ -75,14 +83,23 @@ define void @gather_const_v2i64(ptr %x) {
 }
 
 define void @gather_const_v64i8(ptr %x) {
-; CHECK-LABEL: gather_const_v64i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a1, 32(a0)
-; CHECK-NEXT:    li a2, 64
-; CHECK-NEXT:    vsetvli zero, a2, e8, m4, ta, ma
-; CHECK-NEXT:    vmv.v.x v8, a1
-; CHECK-NEXT:    vse8.v v8, (a0)
-; CHECK-NEXT:    ret
+; RV32-LABEL: gather_const_v64i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lbu a1, 32(a0)
+; RV32-NEXT:    li a2, 64
+; RV32-NEXT:    vsetvli zero, a2, e8, m4, ta, ma
+; RV32-NEXT:    vmv.v.x v8, a1
+; RV32-NEXT:    vse8.v v8, (a0)
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: gather_const_v64i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lb a1, 32(a0)
+; RV64-NEXT:    li a2, 64
+; RV64-NEXT:    vsetvli zero, a2, e8, m4, ta, ma
+; RV64-NEXT:    vmv.v.x v8, a1
+; RV64-NEXT:    vse8.v v8, (a0)
+; RV64-NEXT:    ret
   %a = load <64 x i8>, ptr %x
   %b = extractelement <64 x i8> %a, i32 32
   %c = insertelement <64 x i8> poison, i8 %b, i32 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-mask-load-store.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-mask-load-store.ll
index b73408d023207..9314a32cf51aa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-mask-load-store.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-mask-load-store.ll
@@ -3,47 +3,24 @@
 ; RUN: llc -mtriple=riscv64 -mattr=+v -verify-machineinstrs < %s | FileCheck %s
 
 define void @load_store_v1i1(ptr %x, ptr %y) {
-; CHECK-LABEL: load_store_v1i1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 0(a0)
-; CHECK-NEXT:    andi a0, a0, 1
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
   %a = load <1 x i1>, ptr %x
   store <1 x i1> %a, ptr %y
   ret void
 }
 
 define void @load_store_v2i1(ptr %x, ptr %y) {
-; CHECK-LABEL: load_store_v2i1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 0(a0)
-; CHECK-NEXT:    andi a0, a0, 3
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
   %a = load <2 x i1>, ptr %x
   store <2 x i1> %a, ptr %y
   ret void
 }
 
 define void @load_store_v4i1(ptr %x, ptr %y) {
-; CHECK-LABEL: load_store_v4i1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 0(a0)
-; CHECK-NEXT:    andi a0, a0, 15
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
   %a = load <4 x i1>, ptr %x
   store <4 x i1> %a, ptr %y
   ret void
 }
 
 define void @load_store_v8i1(ptr %x, ptr %y) {
-; CHECK-LABEL: load_store_v8i1:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a0, 0(a0)
-; CHECK-NEXT:    sb a0, 0(a1)
-; CHECK-NEXT:    ret
   %a = load <8 x i1>, ptr %x
   store <8 x i1> %a, ptr %y
   ret void
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
index 533b8b6864ebc..34a5a5271cfc0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
@@ -91,13 +91,13 @@ define <2 x i8> @mgather_v2i8(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i8> %passthru)
 ; RV64ZVE32F-NEXT:  .LBB1_2: # %else2
 ; RV64ZVE32F-NEXT:    ret
 ; RV64ZVE32F-NEXT:  .LBB1_3: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB1_2
 ; RV64ZVE32F-NEXT:  .LBB1_4: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -139,14 +139,14 @@ define <2 x i16> @mgather_v2i8_sextload_v2i16(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB2_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB2_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB2_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -193,14 +193,14 @@ define <2 x i16> @mgather_v2i8_zextload_v2i16(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB3_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB3_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB3_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -247,14 +247,14 @@ define <2 x i32> @mgather_v2i8_sextload_v2i32(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB4_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB4_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB4_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -301,14 +301,14 @@ define <2 x i32> @mgather_v2i8_zextload_v2i32(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB5_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB5_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB5_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -362,14 +362,14 @@ define <2 x i64> @mgather_v2i8_sextload_v2i64(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB6_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB6_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB6_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -424,14 +424,14 @@ define <2 x i64> @mgather_v2i8_zextload_v2i64(<2 x ptr> %ptrs, <2 x i1> %m, <2 x
 ; RV64ZVE32F-NEXT:    andi a3, a2, 1
 ; RV64ZVE32F-NEXT:    beqz a3, .LBB7_2
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:  .LBB7_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a2, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB7_4
 ; RV64ZVE32F-NEXT:  # %bb.3: # %cond.load1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a1)
+; RV64ZVE32F-NEXT:    lb a0, 0(a1)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, ta, ma
@@ -485,14 +485,14 @@ define <4 x i8> @mgather_v4i8(<4 x ptr> %ptrs, <4 x i1> %m, <4 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    ret
 ; RV64ZVE32F-NEXT:  .LBB8_5: # %cond.load
 ; RV64ZVE32F-NEXT:    ld a2, 0(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a2
 ; RV64ZVE32F-NEXT:    andi a2, a1, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB8_2
 ; RV64ZVE32F-NEXT:  .LBB8_6: # %cond.load1
 ; RV64ZVE32F-NEXT:    ld a2, 8(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf4, tu, ma
@@ -501,7 +501,7 @@ define <4 x i8> @mgather_v4i8(<4 x ptr> %ptrs, <4 x i1> %m, <4 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB8_3
 ; RV64ZVE32F-NEXT:  .LBB8_7: # %cond.load4
 ; RV64ZVE32F-NEXT:    ld a2, 16(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 3, e8, mf4, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 2
@@ -509,7 +509,7 @@ define <4 x i8> @mgather_v4i8(<4 x ptr> %ptrs, <4 x i1> %m, <4 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a1, .LBB8_4
 ; RV64ZVE32F-NEXT:  .LBB8_8: # %cond.load7
 ; RV64ZVE32F-NEXT:    ld a0, 24(a0)
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 4, e8, mf4, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 3
@@ -539,7 +539,7 @@ define <4 x i8> @mgather_truemask_v4i8(<4 x ptr> %ptrs, <4 x i8> %passthru) {
 ; RV64ZVE32F-NEXT:    ld a2, 8(a0)
 ; RV64ZVE32F-NEXT:    ld a3, 16(a0)
 ; RV64ZVE32F-NEXT:    ld a0, 24(a0)
-; RV64ZVE32F-NEXT:    lbu a1, 0(a1)
+; RV64ZVE32F-NEXT:    lb a1, 0(a1)
 ; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
 ; RV64ZVE32F-NEXT:    lbu a3, 0(a3)
 ; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
@@ -621,14 +621,14 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    ret
 ; RV64ZVE32F-NEXT:  .LBB11_9: # %cond.load
 ; RV64ZVE32F-NEXT:    ld a2, 0(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a2
 ; RV64ZVE32F-NEXT:    andi a2, a1, 2
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_2
 ; RV64ZVE32F-NEXT:  .LBB11_10: # %cond.load1
 ; RV64ZVE32F-NEXT:    ld a2, 8(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf2, tu, ma
@@ -637,7 +637,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_3
 ; RV64ZVE32F-NEXT:  .LBB11_11: # %cond.load4
 ; RV64ZVE32F-NEXT:    ld a2, 16(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 3, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 2
@@ -645,7 +645,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_4
 ; RV64ZVE32F-NEXT:  .LBB11_12: # %cond.load7
 ; RV64ZVE32F-NEXT:    ld a2, 24(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 4, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 3
@@ -653,7 +653,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_5
 ; RV64ZVE32F-NEXT:  .LBB11_13: # %cond.load10
 ; RV64ZVE32F-NEXT:    ld a2, 32(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 5, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 4
@@ -661,7 +661,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_6
 ; RV64ZVE32F-NEXT:  .LBB11_14: # %cond.load13
 ; RV64ZVE32F-NEXT:    ld a2, 40(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 6, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 5
@@ -669,7 +669,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a2, .LBB11_7
 ; RV64ZVE32F-NEXT:  .LBB11_15: # %cond.load16
 ; RV64ZVE32F-NEXT:    ld a2, 48(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 7, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 6
@@ -677,7 +677,7 @@ define <8 x i8> @mgather_v8i8(<8 x ptr> %ptrs, <8 x i1> %m, <8 x i8> %passthru)
 ; RV64ZVE32F-NEXT:    beqz a1, .LBB11_8
 ; RV64ZVE32F-NEXT:  .LBB11_16: # %cond.load19
 ; RV64ZVE32F-NEXT:    ld a0, 56(a0)
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a0
 ; RV64ZVE32F-NEXT:    vslideup.vi v8, v9, 7
@@ -714,7 +714,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:  # %bb.1: # %cond.load
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:  .LBB12_2: # %else
@@ -725,7 +725,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 1
@@ -737,7 +737,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:  # %bb.5: # %cond.load4
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 3, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 2
@@ -749,7 +749,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v10, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 4, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 3
@@ -761,7 +761,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:  # %bb.9: # %cond.load10
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 5, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 4
@@ -773,7 +773,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 6, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 5
@@ -785,7 +785,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:  # %bb.13: # %cond.load16
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 7, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 6
@@ -797,7 +797,7 @@ define <8 x i8> @mgather_baseidx_v8i8(ptr %base, <8 x i8> %idxs, <8 x i1> %m, <8
 ; RV64ZVE32F-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a1, v8
 ; RV64ZVE32F-NEXT:    add a0, a0, a1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v8, 7
@@ -13280,7 +13280,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, mf2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:  .LBB107_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a1, 2
@@ -13290,7 +13290,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 1
@@ -13302,7 +13302,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.5: # %cond.load4
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 3, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 2
@@ -13314,7 +13314,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v10, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 4, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 3
@@ -13326,7 +13326,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.9: # %cond.load10
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 5, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 4
@@ -13338,7 +13338,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v11, v10, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v11
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 6, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 5
@@ -13350,7 +13350,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.13: # %cond.load16
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 7, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 6
@@ -13362,7 +13362,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v10, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 8, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 7
@@ -13374,7 +13374,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.17: # %cond.load22
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 9, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 8
@@ -13386,7 +13386,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 10, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 9
@@ -13398,7 +13398,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.21: # %cond.load28
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v11, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 11, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v11, 10
@@ -13410,7 +13410,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v10, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 12, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 11
@@ -13422,7 +13422,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.25: # %cond.load34
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 13, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 12
@@ -13434,7 +13434,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v10
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 14, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 13
@@ -13446,7 +13446,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.29: # %cond.load40
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 15, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v10, 14
@@ -13459,7 +13459,7 @@ define <16 x i8> @mgather_baseidx_v16i8(ptr %base, <16 x i8> %idxs, <16 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a1, v8
 ; RV64ZVE32F-NEXT:    add a0, a0, a1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v9, v8, 15
@@ -13516,7 +13516,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vsetvli zero, zero, e8, mf4, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v10, a2
 ; RV64ZVE32F-NEXT:  .LBB108_2: # %else
 ; RV64ZVE32F-NEXT:    andi a2, a1, 2
@@ -13526,7 +13526,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v12, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 2, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 1
@@ -13538,7 +13538,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.5: # %cond.load4
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 3, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 2
@@ -13550,7 +13550,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v12, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 4, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 3
@@ -13562,7 +13562,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.9: # %cond.load10
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 5, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 4
@@ -13574,7 +13574,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v13, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v13
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 6, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 5
@@ -13586,7 +13586,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.13: # %cond.load16
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 7, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 6
@@ -13598,7 +13598,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v12, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 8, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 7
@@ -13610,7 +13610,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.17: # %cond.load22
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 9, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 8
@@ -13622,7 +13622,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v13, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v13
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 10, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 9
@@ -13634,7 +13634,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.21: # %cond.load28
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v13
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v14, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 11, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v14, 10
@@ -13646,7 +13646,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v13, v13, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v13
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 12, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 11
@@ -13658,7 +13658,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.25: # %cond.load34
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 13, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 12
@@ -13670,7 +13670,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v13, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v13
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 14, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 13
@@ -13682,7 +13682,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.29: # %cond.load40
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v13, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 15, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v13, 14
@@ -13702,7 +13702,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 18, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 17
@@ -13714,7 +13714,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.35: # %cond.load52
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 19, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 18
@@ -13726,7 +13726,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v9, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 20, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 19
@@ -13738,7 +13738,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.39: # %cond.load58
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 21, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 20
@@ -13750,7 +13750,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v12, v9, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v12
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 22, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 21
@@ -13762,7 +13762,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.43: # %cond.load64
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 23, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 22
@@ -13774,7 +13774,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v9, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 24, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 23
@@ -13786,7 +13786,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.47: # %cond.load70
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 25, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 24
@@ -13798,7 +13798,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 26, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 25
@@ -13810,7 +13810,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.51: # %cond.load76
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 27, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 26
@@ -13822,7 +13822,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v9, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 28, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 27
@@ -13834,7 +13834,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.55: # %cond.load82
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 29, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 28
@@ -13846,7 +13846,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 30, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 29
@@ -13858,7 +13858,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:  # %bb.59: # %cond.load88
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 31, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 30
@@ -13871,7 +13871,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a1, v8
 ; RV64ZVE32F-NEXT:    add a0, a0, a1
-; RV64ZVE32F-NEXT:    lbu a0, 0(a0)
+; RV64ZVE32F-NEXT:    lb a0, 0(a0)
 ; RV64ZVE32F-NEXT:    li a1, 32
 ; RV64ZVE32F-NEXT:    vmv.s.x v8, a0
 ; RV64ZVE32F-NEXT:    vsetvli zero, a1, e8, m2, ta, ma
@@ -13885,7 +13885,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vslidedown.vi v9, v12, 1
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v9
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v9, a2
 ; RV64ZVE32F-NEXT:    vsetivli zero, 16, e8, m1, tu, ma
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v9, 15
@@ -13895,7 +13895,7 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
 ; RV64ZVE32F-NEXT:    vsetivli zero, 17, e8, m2, tu, ma
 ; RV64ZVE32F-NEXT:    vmv.x.s a2, v8
 ; RV64ZVE32F-NEXT:    add a2, a0, a2
-; RV64ZVE32F-NEXT:    lbu a2, 0(a2)
+; RV64ZVE32F-NEXT:    lb a2, 0(a2)
 ; RV64ZVE32F-NEXT:    vmv.s.x v12, a2
 ; RV64ZVE32F-NEXT:    vslideup.vi v10, v12, 16
 ; RV64ZVE32F-NEXT:    slli a2, a1, 46
@@ -14121,7 +14121,7 @@ define <8 x i16> @mgather_strided_unaligned(ptr %base) {
 ; RV64V-NEXT:    vmv.x.s a3, v12
 ; RV64V-NEXT:    vslidedown.vi v12, v8, 3
 ; RV64V-NEXT:    lbu a4, 0(a0)
-; RV64V-NEXT:    lbu a0, 1(a0)
+; RV64V-NEXT:    lb a0, 1(a0)
 ; RV64V-NEXT:    vmv.x.s a5, v12
 ; RV64V-NEXT:    lbu a6, 0(a2)
 ; RV64V-NEXT:    lbu a2, 1(a2)
@@ -14144,7 +14144,7 @@ define <8 x i16> @mgather_strided_unaligned(ptr %base) {
 ; RV64V-NEXT:    slli a5, a5, 8
 ; RV64V-NEXT:    or a5, a5, t0
 ; RV64V-NEXT:    lbu t0, 0(a3)
-; RV64V-NEXT:    lbu a3, 1(a3)
+; RV64V-NEXT:    lb a3, 1(a3)
 ; RV64V-NEXT:    vmv.v.x v8, a0
 ; RV64V-NEXT:    lbu a0, 0(a4)
 ; RV64V-NEXT:    lbu a4, 1(a4)
@@ -14182,7 +14182,7 @@ define <8 x i16> @mgather_strided_unaligned(ptr %base) {
 ; RV64ZVE32F-LABEL: mgather_strided_unaligned:
 ; RV64ZVE32F:       # %bb.0:
 ; RV64ZVE32F-NEXT:    lbu a1, 0(a0)
-; RV64ZVE32F-NEXT:    lbu a2, 1(a0)
+; RV64ZVE32F-NEXT:    lb a2, 1(a0)
 ; RV64ZVE32F-NEXT:    lbu a3, 4(a0)
 ; RV64ZVE32F-NEXT:    lbu a4, 5(a0)
 ; RV64ZVE32F-NEXT:    lbu a5, 8(a0)
@@ -14194,7 +14194,7 @@ define <8 x i16> @mgather_strided_unaligned(ptr %base) {
 ; RV64ZVE32F-NEXT:    or a1, a2, a1
 ; RV64ZVE32F-NEXT:    or a3, a4, a3
 ; RV64ZVE32F-NEXT:    lbu a2, 16(a0)
-; RV64ZVE32F-NEXT:    lbu a4, 17(a0)
+; RV64ZVE32F-NEXT:    lb a4, 17(a0)
 ; RV64ZVE32F-NEXT:    lbu t1, 20(a0)
 ; RV64ZVE32F-NEXT:    lbu t2, 21(a0)
 ; RV64ZVE32F-NEXT:    slli a6, a6, 8
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
index d3a36525115c8..3d3d2e7ce9cd3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
@@ -5842,14 +5842,23 @@ define i8 @vreduce_mul_v1i8(<1 x i8> %v) {
 declare i8 @llvm.vector.reduce.mul.v2i8(<2 x i8>)
 
 define i8 @vreduce_mul_v2i8(ptr %x) {
-; CHECK-LABEL: vreduce_mul_v2i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 2, e8, mf8, ta, ma
-; CHECK-NEXT:    vle8.v v8, (a0)
-; CHECK-NEXT:    lbu a0, 1(a0)
-; CHECK-NEXT:    vmul.vx v8, v8, a0
-; CHECK-NEXT:    vmv.x.s a0, v8
-; CHECK-NEXT:    ret
+; RV32-LABEL: vreduce_mul_v2i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    vsetivli zero, 2, e8, mf8, ta, ma
+; RV32-NEXT:    vle8.v v8, (a0)
+; RV32-NEXT:    lbu a0, 1(a0)
+; RV32-NEXT:    vmul.vx v8, v8, a0
+; RV32-NEXT:    vmv.x.s a0, v8
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vreduce_mul_v2i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    vsetivli zero, 2, e8, mf8, ta, ma
+; RV64-NEXT:    vle8.v v8, (a0)
+; RV64-NEXT:    lb a0, 1(a0)
+; RV64-NEXT:    vmul.vx v8, v8, a0
+; RV64-NEXT:    vmv.x.s a0, v8
+; RV64-NEXT:    ret
   %v = load <2 x i8>, ptr %x
   %red = call i8 @llvm.vector.reduce.mul.v2i8(<2 x i8> %v)
   ret i8 %red
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
index 07aa05f609c40..c59698a8c71be 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-load-store-asm.ll
@@ -144,7 +144,7 @@ define void @gather_zero_stride(ptr noalias nocapture %A, ptr noalias nocapture
 ; CHECK-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; CHECK-NEXT:  .LBB3_1: # %vector.body
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    lbu a3, 0(a1)
+; CHECK-NEXT:    lb a3, 0(a1)
 ; CHECK-NEXT:    vle8.v v8, (a0)
 ; CHECK-NEXT:    vadd.vx v8, v8, a3
 ; CHECK-NEXT:    vse8.v v8, (a0)
@@ -221,7 +221,7 @@ define void @gather_zero_stride_unfold(ptr noalias nocapture %A, ptr noalias noc
 ; V-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; V-NEXT:  .LBB5_1: # %vector.body
 ; V-NEXT:    # =>This Inner Loop Header: Depth=1
-; V-NEXT:    lbu a3, 0(a1)
+; V-NEXT:    lb a3, 0(a1)
 ; V-NEXT:    vle8.v v8, (a0)
 ; V-NEXT:    vmv.v.x v9, a3
 ; V-NEXT:    vdivu.vv v8, v9, v8
@@ -239,7 +239,7 @@ define void @gather_zero_stride_unfold(ptr noalias nocapture %A, ptr noalias noc
 ; ZVE32F-NEXT:    vsetvli zero, a3, e8, m1, ta, ma
 ; ZVE32F-NEXT:  .LBB5_1: # %vector.body
 ; ZVE32F-NEXT:    # =>This Inner Loop Header: Depth=1
-; ZVE32F-NEXT:    lbu a3, 0(a1)
+; ZVE32F-NEXT:    lb a3, 0(a1)
 ; ZVE32F-NEXT:    vle8.v v8, (a0)
 ; ZVE32F-NEXT:    vmv.v.x v9, a3
 ; ZVE32F-NEXT:    vdivu.vv v8, v9, v8
@@ -939,8 +939,8 @@ define void @strided_load_startval_add_with_splat(ptr noalias nocapture %arg, pt
 ; CHECK-NEXT:    addi a0, a0, 1
 ; CHECK-NEXT:  .LBB14_6: # %bb35
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
-; CHECK-NEXT:    lbu a3, 0(a1)
-; CHECK-NEXT:    lbu a4, 0(a2)
+; CHECK-NEXT:    lb a3, 0(a1)
+; CHECK-NEXT:    lb a4, 0(a2)
 ; CHECK-NEXT:    add a3, a4, a3
 ; CHECK-NEXT:    sb a3, 0(a2)
 ; CHECK-NEXT:    addi a2, a2, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
index 4b7f82f94f5e4..fc1d176eb6043 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-strided-vpload.ll
@@ -711,13 +711,6 @@ define <4 x i8> @zero_strided_unmasked_vpload_4i8_i8(ptr %ptr) {
 ; CHECK-OPT-NEXT:    vsetivli zero, 3, e8, mf4, ta, ma
 ; CHECK-OPT-NEXT:    vlse8.v v8, (a0), zero
 ; CHECK-OPT-NEXT:    ret
-;
-; CHECK-NO-OPT-LABEL: zero_strided_unmasked_vpload_4i8_i8:
-; CHECK-NO-OPT:       # %bb.0:
-; CHECK-NO-OPT-NEXT:    lbu a0, 0(a0)
-; CHECK-NO-OPT-NEXT:    vsetivli zero, 3, e8, mf4, ta, ma
-; CHECK-NO-OPT-NEXT:    vmv.v.x v8, a0
-; CHECK-NO-OPT-NEXT:    ret
   %load = call <4 x i8> @llvm.experimental.vp.strided.load.4i8.p0.i8(ptr %ptr, i8 0, <4 x i1> splat (i1 true), i32 3)
   ret <4 x i8> %load
 }
@@ -765,3 +758,5 @@ define <4 x i64> @zero_strided_vadd.vx(<4 x i64> %v, ptr %ptr) {
   %w = add <4 x i64> %v, %load
   ret <4 x i64> %w
 }
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; CHECK-NO-OPT: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
index b6253c6ea63b2..085bcaa315fba 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-unaligned.ll
@@ -118,7 +118,7 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
 ; RV64-SLOW-NEXT:  # %bb.1: # %cond.load
 ; RV64-SLOW-NEXT:    vsetvli zero, zero, e64, m8, ta, ma
 ; RV64-SLOW-NEXT:    vmv.x.s a1, v8
-; RV64-SLOW-NEXT:    lbu a2, 1(a1)
+; RV64-SLOW-NEXT:    lb a2, 1(a1)
 ; RV64-SLOW-NEXT:    lbu a1, 0(a1)
 ; RV64-SLOW-NEXT:    slli a2, a2, 8
 ; RV64-SLOW-NEXT:    or a1, a2, a1
@@ -131,7 +131,7 @@ define <2 x i16> @mgather_v2i16_align1(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i16> %
 ; RV64-SLOW-NEXT:    vsetivli zero, 1, e64, m1, ta, ma
 ; RV64-SLOW-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64-SLOW-NEXT:    vmv.x.s a0, v8
-; RV64-SLOW-NEXT:    lbu a1, 1(a0)
+; RV64-SLOW-NEXT:    lb a1, 1(a0)
 ; RV64-SLOW-NEXT:    lbu a0, 0(a0)
 ; RV64-SLOW-NEXT:    slli a1, a1, 8
 ; RV64-SLOW-NEXT:    or a0, a1, a0
@@ -204,7 +204,7 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
 ; RV64-SLOW-NEXT:  # %bb.1: # %cond.load
 ; RV64-SLOW-NEXT:    vsetvli zero, zero, e64, m8, tu, ma
 ; RV64-SLOW-NEXT:    vmv.x.s a1, v8
-; RV64-SLOW-NEXT:    lwu a2, 4(a1)
+; RV64-SLOW-NEXT:    lw a2, 4(a1)
 ; RV64-SLOW-NEXT:    lwu a1, 0(a1)
 ; RV64-SLOW-NEXT:    slli a2, a2, 32
 ; RV64-SLOW-NEXT:    or a1, a2, a1
@@ -216,7 +216,7 @@ define <2 x i64> @mgather_v2i64_align4(<2 x ptr> %ptrs, <2 x i1> %m, <2 x i64> %
 ; RV64-SLOW-NEXT:    vsetivli zero, 2, e64, m1, ta, ma
 ; RV64-SLOW-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64-SLOW-NEXT:    vmv.x.s a0, v8
-; RV64-SLOW-NEXT:    lwu a1, 4(a0)
+; RV64-SLOW-NEXT:    lw a1, 4(a0)
 ; RV64-SLOW-NEXT:    lwu a0, 0(a0)
 ; RV64-SLOW-NEXT:    slli a1, a1, 32
 ; RV64-SLOW-NEXT:    or a0, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
index 1a716f688dd59..351bfd5d06cea 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
@@ -644,13 +644,21 @@ define <16 x i64> @vwaddu_vx_v16i64(ptr %x, i32 %y) {
 }
 
 define <8 x i16> @vwaddu_vx_v8i16_i8(ptr %x, ptr %y) {
-; CHECK-LABEL: vwaddu_vx_v8i16_i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT:    vle8.v v9, (a0)
-; CHECK-NEXT:    lbu a0, 0(a1)
-; CHECK-NEXT:    vwaddu.vx v8, v9, a0
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwaddu_vx_v8i16_i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV32-NEXT:    vle8.v v9, (a0)
+; RV32-NEXT:    lbu a0, 0(a1)
+; RV32-NEXT:    vwaddu.vx v8, v9, a0
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwaddu_vx_v8i16_i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV64-NEXT:    vle8.v v9, (a0)
+; RV64-NEXT:    lb a0, 0(a1)
+; RV64-NEXT:    vwaddu.vx v8, v9, a0
+; RV64-NEXT:    ret
   %a = load <8 x i8>, ptr %x
   %b = load i8, ptr %y
   %c = zext i8 %b to i16
@@ -699,13 +707,21 @@ define <4 x i32> @vwaddu_vx_v4i32_i8(ptr %x, ptr %y) {
 }
 
 define <4 x i32> @vwaddu_vx_v4i32_i16(ptr %x, ptr %y) {
-; CHECK-LABEL: vwaddu_vx_v4i32_i16:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vle16.v v9, (a0)
-; CHECK-NEXT:    lhu a0, 0(a1)
-; CHECK-NEXT:    vwaddu.vx v8, v9, a0
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwaddu_vx_v4i32_i16:
+; RV32:       # %bb.0:
+; RV32-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV32-NEXT:    vle16.v v9, (a0)
+; RV32-NEXT:    lhu a0, 0(a1)
+; RV32-NEXT:    vwaddu.vx v8, v9, a0
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwaddu_vx_v4i32_i16:
+; RV64:       # %bb.0:
+; RV64-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV64-NEXT:    vle16.v v9, (a0)
+; RV64-NEXT:    lh a0, 0(a1)
+; RV64-NEXT:    vwaddu.vx v8, v9, a0
+; RV64-NEXT:    ret
   %a = load <4 x i16>, ptr %x
   %b = load i16, ptr %y
   %c = zext i16 %b to i32
@@ -818,7 +834,7 @@ define <2 x i64> @vwaddu_vx_v2i64_i32(ptr %x, ptr %y) nounwind {
 ; RV64:       # %bb.0:
 ; RV64-NEXT:    vsetivli zero, 2, e32, mf2, ta, ma
 ; RV64-NEXT:    vle32.v v9, (a0)
-; RV64-NEXT:    lwu a0, 0(a1)
+; RV64-NEXT:    lw a0, 0(a1)
 ; RV64-NEXT:    vwaddu.vx v8, v9, a0
 ; RV64-NEXT:    ret
   %a = load <2 x i32>, ptr %x
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
index 8ebd93e9dc637..afa7d0d62b208 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
@@ -689,13 +689,21 @@ define <16 x i64> @vwmulsu_vx_v16i64(ptr %x, i32 %y) {
 }
 
 define <8 x i16> @vwmulsu_vx_v8i16_i8(ptr %x, ptr %y) {
-; CHECK-LABEL: vwmulsu_vx_v8i16_i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT:    vle8.v v9, (a0)
-; CHECK-NEXT:    lbu a0, 0(a1)
-; CHECK-NEXT:    vwmulsu.vx v8, v9, a0
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwmulsu_vx_v8i16_i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV32-NEXT:    vle8.v v9, (a0)
+; RV32-NEXT:    lbu a0, 0(a1)
+; RV32-NEXT:    vwmulsu.vx v8, v9, a0
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwmulsu_vx_v8i16_i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV64-NEXT:    vle8.v v9, (a0)
+; RV64-NEXT:    lb a0, 0(a1)
+; RV64-NEXT:    vwmulsu.vx v8, v9, a0
+; RV64-NEXT:    ret
   %a = load <8 x i8>, ptr %x
   %b = load i8, ptr %y
   %c = zext i8 %b to i16
@@ -744,13 +752,21 @@ define <4 x i32> @vwmulsu_vx_v4i32_i8(ptr %x, ptr %y) {
 }
 
 define <4 x i32> @vwmulsu_vx_v4i32_i16(ptr %x, ptr %y) {
-; CHECK-LABEL: vwmulsu_vx_v4i32_i16:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vle16.v v9, (a0)
-; CHECK-NEXT:    lhu a0, 0(a1)
-; CHECK-NEXT:    vwmulsu.vx v8, v9, a0
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwmulsu_vx_v4i32_i16:
+; RV32:       # %bb.0:
+; RV32-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV32-NEXT:    vle16.v v9, (a0)
+; RV32-NEXT:    lhu a0, 0(a1)
+; RV32-NEXT:    vwmulsu.vx v8, v9, a0
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwmulsu_vx_v4i32_i16:
+; RV64:       # %bb.0:
+; RV64-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV64-NEXT:    vle16.v v9, (a0)
+; RV64-NEXT:    lh a0, 0(a1)
+; RV64-NEXT:    vwmulsu.vx v8, v9, a0
+; RV64-NEXT:    ret
   %a = load <4 x i16>, ptr %x
   %b = load i16, ptr %y
   %c = zext i16 %b to i32
@@ -853,7 +869,7 @@ define <2 x i64> @vwmulsu_vx_v2i64_i32(ptr %x, ptr %y) {
 ; RV64:       # %bb.0:
 ; RV64-NEXT:    vsetivli zero, 2, e32, mf2, ta, ma
 ; RV64-NEXT:    vle32.v v9, (a0)
-; RV64-NEXT:    lwu a0, 0(a1)
+; RV64-NEXT:    lw a0, 0(a1)
 ; RV64-NEXT:    vwmulsu.vx v8, v9, a0
 ; RV64-NEXT:    ret
   %a = load <2 x i32>, ptr %x
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
index 90e9ffdcb320a..3495fe48b992e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
@@ -656,13 +656,6 @@ define <16 x i64> @vwmulu_vx_v16i64(ptr %x, i32 %y) {
 }
 
 define <8 x i16> @vwmulu_vx_v8i16_i8(ptr %x, ptr %y) {
-; CHECK-LABEL: vwmulu_vx_v8i16_i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT:    vle8.v v9, (a0)
-; CHECK-NEXT:    lbu a0, 0(a1)
-; CHECK-NEXT:    vwmulu.vx v8, v9, a0
-; CHECK-NEXT:    ret
   %a = load <8 x i8>, ptr %x
   %b = load i8, ptr %y
   %c = zext i8 %b to i16
@@ -710,13 +703,6 @@ define <4 x i32> @vwmulu_vx_v4i32_i8(ptr %x, ptr %y) {
 }
 
 define <4 x i32> @vwmulu_vx_v4i32_i16(ptr %x, ptr %y) {
-; CHECK-LABEL: vwmulu_vx_v4i32_i16:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vle16.v v9, (a0)
-; CHECK-NEXT:    lhu a0, 0(a1)
-; CHECK-NEXT:    vwmulu.vx v8, v9, a0
-; CHECK-NEXT:    ret
   %a = load <4 x i16>, ptr %x
   %b = load i16, ptr %y
   %c = zext i16 %b to i32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
index bfdda47cc819e..06f36251c461c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
@@ -644,14 +644,23 @@ define <16 x i64> @vwsubu_vx_v16i64(ptr %x, i32 %y) {
 }
 
 define <8 x i16> @vwsubu_vx_v8i16_i8(ptr %x, ptr %y) {
-; CHECK-LABEL: vwsubu_vx_v8i16_i8:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lbu a1, 0(a1)
-; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT:    vle8.v v9, (a0)
-; CHECK-NEXT:    vmv.v.x v10, a1
-; CHECK-NEXT:    vwsubu.vv v8, v10, v9
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwsubu_vx_v8i16_i8:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lbu a1, 0(a1)
+; RV32-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV32-NEXT:    vle8.v v9, (a0)
+; RV32-NEXT:    vmv.v.x v10, a1
+; RV32-NEXT:    vwsubu.vv v8, v10, v9
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwsubu_vx_v8i16_i8:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lb a1, 0(a1)
+; RV64-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; RV64-NEXT:    vle8.v v9, (a0)
+; RV64-NEXT:    vmv.v.x v10, a1
+; RV64-NEXT:    vwsubu.vv v8, v10, v9
+; RV64-NEXT:    ret
   %a = load <8 x i8>, ptr %x
   %b = load i8, ptr %y
   %c = zext i8 %b to i16
@@ -701,14 +710,23 @@ define <4 x i32> @vwsubu_vx_v4i32_i8(ptr %x, ptr %y) {
 }
 
 define <4 x i32> @vwsubu_vx_v4i32_i16(ptr %x, ptr %y) {
-; CHECK-LABEL: vwsubu_vx_v4i32_i16:
-; CHECK:       # %bb.0:
-; CHECK-NEXT:    lhu a1, 0(a1)
-; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vle16.v v9, (a0)
-; CHECK-NEXT:    vmv.v.x v10, a1
-; CHECK-NEXT:    vwsubu.vv v8, v10, v9
-; CHECK-NEXT:    ret
+; RV32-LABEL: vwsubu_vx_v4i32_i16:
+; RV32:       # %bb.0:
+; RV32-NEXT:    lhu a1, 0(a1)
+; RV32-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV32-NEXT:    vle16.v v9, (a0)
+; RV32-NEXT:    vmv.v.x v10, a1
+; RV32-NEXT:    vwsubu.vv v8, v10, v9
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vwsubu_vx_v4i32_i16:
+; RV64:       # %bb.0:
+; RV64-NEXT:    lh a1, 0(a1)
+; RV64-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
+; RV64-NEXT:    vle16.v v9, (a0)
+; RV64-NEXT:    vmv.v.x v10, a1
+; RV64-NEXT:    vwsubu.vv v8, v10, v9
+; RV64-NEXT:    ret
   %a = load <4 x i16>, ptr %x
   %b = load i16, ptr %y
   %c = zext i16 %b to i32
@@ -821,7 +839,7 @@ define <2 x i64> @vwsubu_vx_v2i64_i32(ptr %x, ptr %y) nounwind {
 ;
 ; RV64-LABEL: vwsubu_vx_v2i64_i32:
 ; RV64:       # %bb.0:
-; RV64-NEXT:    lwu a1, 0(a1)
+; RV64-NEXT:    lw a1, 0(a1)
 ; RV64-NEXT:    vsetivli zero, 2, e32, mf2, ta, ma
 ; RV64-NEXT:    vle32.v v9, (a0)
 ; RV64-NEXT:    vmv.v.x v10, a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/memcpy-inline.ll b/llvm/test/CodeGen/RISCV/rvv/memcpy-inline.ll
index 2553f563b7d0f..caef35f9924d6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/memcpy-inline.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/memcpy-inline.ll
@@ -21,7 +21,7 @@ define void @unaligned_memcpy1(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: unaligned_memcpy1:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a1, 0(a1)
+; RV64-BOTH-NEXT:    lb a1, 0(a1)
 ; RV64-BOTH-NEXT:    sb a1, 0(a0)
 ; RV64-BOTH-NEXT:    ret
 entry:
@@ -72,7 +72,7 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy3:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 2(a1)
+; RV64-NEXT:    lb a2, 2(a1)
 ; RV64-NEXT:    sb a2, 2(a0)
 ; RV64-NEXT:    vsetivli zero, 2, e8, mf8, ta, ma
 ; RV64-NEXT:    vle8.v v8, (a1)
@@ -89,7 +89,7 @@ define void @unaligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-FAST-LABEL: unaligned_memcpy3:
 ; RV64-FAST:       # %bb.0: # %entry
-; RV64-FAST-NEXT:    lbu a2, 2(a1)
+; RV64-FAST-NEXT:    lb a2, 2(a1)
 ; RV64-FAST-NEXT:    sb a2, 2(a0)
 ; RV64-FAST-NEXT:    lh a1, 0(a1)
 ; RV64-FAST-NEXT:    sh a1, 0(a0)
@@ -147,7 +147,7 @@ define void @unaligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
 ; RV64-NEXT:    vsetivli zero, 4, e8, mf4, ta, ma
 ; RV64-NEXT:    vle8.v v8, (a1)
@@ -233,7 +233,7 @@ define void @unaligned_memcpy15(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: unaligned_memcpy15:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
 ; RV64-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
 ; RV64-NEXT:    vle8.v v8, (a1)
@@ -552,13 +552,13 @@ define void @unaligned_memcpy196(ptr nocapture %dest, ptr %src) nounwind {
 ; RV64-NEXT:    vle8.v v8, (a2)
 ; RV64-NEXT:    addi a2, a0, 128
 ; RV64-NEXT:    vse8.v v8, (a2)
-; RV64-NEXT:    lbu a2, 195(a1)
+; RV64-NEXT:    lb a2, 195(a1)
 ; RV64-NEXT:    sb a2, 195(a0)
-; RV64-NEXT:    lbu a2, 194(a1)
+; RV64-NEXT:    lb a2, 194(a1)
 ; RV64-NEXT:    sb a2, 194(a0)
-; RV64-NEXT:    lbu a2, 193(a1)
+; RV64-NEXT:    lb a2, 193(a1)
 ; RV64-NEXT:    sb a2, 193(a0)
-; RV64-NEXT:    lbu a1, 192(a1)
+; RV64-NEXT:    lb a1, 192(a1)
 ; RV64-NEXT:    sb a1, 192(a0)
 ; RV64-NEXT:    ret
 ;
@@ -677,7 +677,7 @@ define void @aligned_memcpy3(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-BOTH-LABEL: aligned_memcpy3:
 ; RV64-BOTH:       # %bb.0: # %entry
-; RV64-BOTH-NEXT:    lbu a2, 2(a1)
+; RV64-BOTH-NEXT:    lb a2, 2(a1)
 ; RV64-BOTH-NEXT:    sb a2, 2(a0)
 ; RV64-BOTH-NEXT:    lh a1, 0(a1)
 ; RV64-BOTH-NEXT:    sh a1, 0(a0)
@@ -717,7 +717,7 @@ define void @aligned_memcpy7(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy7:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 6(a1)
+; RV64-NEXT:    lb a2, 6(a1)
 ; RV64-NEXT:    sb a2, 6(a0)
 ; RV64-NEXT:    lh a2, 4(a1)
 ; RV64-NEXT:    sh a2, 4(a0)
@@ -779,7 +779,7 @@ define void @aligned_memcpy15(ptr nocapture %dest, ptr %src) nounwind {
 ;
 ; RV64-LABEL: aligned_memcpy15:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 14(a1)
+; RV64-NEXT:    lb a2, 14(a1)
 ; RV64-NEXT:    sb a2, 14(a0)
 ; RV64-NEXT:    lh a2, 12(a1)
 ; RV64-NEXT:    sh a2, 12(a0)
@@ -1080,7 +1080,7 @@ define i32 @memcpy11_align8(ptr nocapture %dest, ptr %src) {
 ;
 ; RV64-LABEL: memcpy11_align8:
 ; RV64:       # %bb.0: # %entry
-; RV64-NEXT:    lbu a2, 10(a1)
+; RV64-NEXT:    lb a2, 10(a1)
 ; RV64-NEXT:    sb a2, 10(a0)
 ; RV64-NEXT:    lh a2, 8(a1)
 ; RV64-NEXT:    sh a2, 8(a0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/stores-of-loads-merging.ll b/llvm/test/CodeGen/RISCV/rvv/stores-of-loads-merging.ll
index dcdf548020e9d..b024e04bf68a8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/stores-of-loads-merging.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/stores-of-loads-merging.ll
@@ -144,8 +144,8 @@ define void @i8_i16_rotate(ptr %p, ptr %q) {
 ; CHECK-NEXT:    .cfi_offset s0, -16
 ; CHECK-NEXT:    .cfi_offset s1, -24
 ; CHECK-NEXT:    .cfi_offset s2, -32
-; CHECK-NEXT:    lbu s1, 0(a0)
-; CHECK-NEXT:    lbu s2, 1(a0)
+; CHECK-NEXT:    lb s1, 0(a0)
+; CHECK-NEXT:    lb s2, 1(a0)
 ; CHECK-NEXT:    mv s0, a1
 ; CHECK-NEXT:    call g
 ; CHECK-NEXT:    sb s2, 0(s0)
@@ -189,10 +189,10 @@ define void @i8_i16_resched_readnone_ld(ptr %p, ptr %q) {
 ; CHECK-NEXT:    .cfi_offset s1, -24
 ; CHECK-NEXT:    .cfi_offset s2, -32
 ; CHECK-NEXT:    mv s0, a0
-; CHECK-NEXT:    lbu s2, 0(a0)
+; CHECK-NEXT:    lb s2, 0(a0)
 ; CHECK-NEXT:    mv s1, a1
 ; CHECK-NEXT:    call g
-; CHECK-NEXT:    lbu s0, 1(s0)
+; CHECK-NEXT:    lb s0, 1(s0)
 ; CHECK-NEXT:    call g
 ; CHECK-NEXT:    sb s2, 0(s1)
 ; CHECK-NEXT:    sb s0, 1(s1)
@@ -235,8 +235,8 @@ define void @i8_i16_resched_readnone_st(ptr %p, ptr %q) {
 ; CHECK-NEXT:    .cfi_offset s0, -16
 ; CHECK-NEXT:    .cfi_offset s1, -24
 ; CHECK-NEXT:    .cfi_offset s2, -32
-; CHECK-NEXT:    lbu s1, 0(a0)
-; CHECK-NEXT:    lbu s2, 1(a0)
+; CHECK-NEXT:    lb s1, 0(a0)
+; CHECK-NEXT:    lb s2, 1(a0)
 ; CHECK-NEXT:    mv s0, a1
 ; CHECK-NEXT:    call g
 ; CHECK-NEXT:    sb s1, 0(s0)
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
index ecd098edb30ae..6570e517c203a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpload.ll
@@ -879,12 +879,19 @@ define <vscale x 1 x i8> @zero_strided_unmasked_vpload_nxv1i8_i8(ptr %ptr) {
 ; CHECK-OPT-NEXT:    vlse8.v v8, (a0), zero
 ; CHECK-OPT-NEXT:    ret
 ;
-; CHECK-NO-OPT-LABEL: zero_strided_unmasked_vpload_nxv1i8_i8:
-; CHECK-NO-OPT:       # %bb.0:
-; CHECK-NO-OPT-NEXT:    lbu a0, 0(a0)
-; CHECK-NO-OPT-NEXT:    vsetivli zero, 4, e8, mf8, ta, ma
-; CHECK-NO-OPT-NEXT:    vmv.v.x v8, a0
-; CHECK-NO-OPT-NEXT:    ret
+; CHECK-NO-OPT-RV32-LABEL: zero_strided_unmasked_vpload_nxv1i8_i8:
+; CHECK-NO-OPT-RV32:       # %bb.0:
+; CHECK-NO-OPT-RV32-NEXT:    lbu a0, 0(a0)
+; CHECK-NO-OPT-RV32-NEXT:    vsetivli zero, 4, e8, mf8, ta, ma
+; CHECK-NO-OPT-RV32-NEXT:    vmv.v.x v8, a0
+; CHECK-NO-OPT-RV32-NEXT:    ret
+;
+; CHECK-NO-OPT-RV64-LABEL: zero_strided_unmasked_vpload_nxv1i8_i8:
+; CHECK-NO-OPT-RV64:       # %bb.0:
+; CHECK-NO-OPT-RV64-NEXT:    lb a0, 0(a0)
+; CHECK-NO-OPT-RV64-NEXT:    vsetivli zero, 4, e8, mf8, ta, ma
+; CHECK-NO-OPT-RV64-NEXT:    vmv.v.x v8, a0
+; CHECK-NO-OPT-RV64-NEXT:    ret
   %load = call <vscale x 1 x i8> @llvm.experimental.vp.strided.load.nxv1i8.p0.i8(ptr %ptr, i8 0, <vscale x 1 x i1> splat (i1 true), i32 4)
   ret <vscale x 1 x i8> %load
 }
@@ -999,3 +1006,5 @@ define <vscale x 1 x ptr> @zero_strided_vadd_nxv1p0(<vscale x 1 x ptr> %v, ptr %
   %load = call <vscale x 1 x ptr> @llvm.experimental.vp.strided.load.nxv1p0.p0.i32(ptr %ptr, i32 0, <vscale x 1 x i1> splat (i1 true), i32 %vscale)
   ret <vscale x 1 x ptr> %load
 }
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; CHECK-NO-OPT: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
index 93fb230f51ce1..e255e934c10c5 100644
--- a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
@@ -406,7 +406,7 @@ define void @test_srem_vec(ptr %X) nounwind {
 ; RV64-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
 ; RV64-NEXT:    sd s4, 0(sp) # 8-byte Folded Spill
 ; RV64-NEXT:    mv s0, a0
-; RV64-NEXT:    lbu a0, 12(a0)
+; RV64-NEXT:    lb a0, 12(a0)
 ; RV64-NEXT:    ld s3, 0(s0)
 ; RV64-NEXT:    lwu a1, 8(s0)
 ; RV64-NEXT:    slli a0, a0, 32
@@ -609,7 +609,7 @@ define void @test_srem_vec(ptr %X) nounwind {
 ; RV64M:       # %bb.0:
 ; RV64M-NEXT:    ld a1, 0(a0)
 ; RV64M-NEXT:    lwu a2, 8(a0)
-; RV64M-NEXT:    lbu a3, 12(a0)
+; RV64M-NEXT:    lb a3, 12(a0)
 ; RV64M-NEXT:    lui a4, %hi(.LCPI3_0)
 ; RV64M-NEXT:    lui a5, 699051
 ; RV64M-NEXT:    addi a5, a5, -1365
@@ -794,7 +794,7 @@ define void @test_srem_vec(ptr %X) nounwind {
 ; RV64MV:       # %bb.0:
 ; RV64MV-NEXT:    ld a1, 0(a0)
 ; RV64MV-NEXT:    lwu a2, 8(a0)
-; RV64MV-NEXT:    lbu a3, 12(a0)
+; RV64MV-NEXT:    lb a3, 12(a0)
 ; RV64MV-NEXT:    lui a4, %hi(.LCPI3_0)
 ; RV64MV-NEXT:    lui a5, %hi(.LCPI3_1)
 ; RV64MV-NEXT:    lui a6, %hi(.LCPI3_2)
diff --git a/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll b/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
index b76f2b2464709..3d8fb153db224 100644
--- a/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
+++ b/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
@@ -653,7 +653,7 @@ define void @f11(i32 %vla_size, i64 %i) #0 {
 ; RV64I-NEXT:    blt a0, sp, .LBB11_3
 ; RV64I-NEXT:  # %bb.4:
 ; RV64I-NEXT:    mv sp, a0
-; RV64I-NEXT:    lbu zero, 0(a0)
+; RV64I-NEXT:    lb zero, 0(a0)
 ; RV64I-NEXT:    addi sp, s0, -2032
 ; RV64I-NEXT:    .cfi_def_cfa sp, 2032
 ; RV64I-NEXT:    ld ra, 2024(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/unaligned-load-store.ll b/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
index c9c49e8f7f532..1ef6224fe1b4b 100644
--- a/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
+++ b/llvm/test/CodeGen/RISCV/unaligned-load-store.ll
@@ -63,8 +63,8 @@ define i24 @load_i24(ptr %p) {
 ;
 ; RV64IZBKB-LABEL: load_i24:
 ; RV64IZBKB:       # %bb.0:
-; RV64IZBKB-NEXT:    lbu a1, 0(a0)
-; RV64IZBKB-NEXT:    lbu a2, 1(a0)
+; RV64IZBKB-NEXT:    lb a1, 0(a0)
+; RV64IZBKB-NEXT:    lb a2, 1(a0)
 ; RV64IZBKB-NEXT:    lbu a0, 2(a0)
 ; RV64IZBKB-NEXT:    packh a1, a1, a2
 ; RV64IZBKB-NEXT:    slli a0, a0, 16
@@ -110,8 +110,8 @@ define i32 @load_i32(ptr %p) {
 ;
 ; RV64IZBKB-LABEL: load_i32:
 ; RV64IZBKB:       # %bb.0:
-; RV64IZBKB-NEXT:    lbu a1, 0(a0)
-; RV64IZBKB-NEXT:    lbu a2, 1(a0)
+; RV64IZBKB-NEXT:    lb a1, 0(a0)
+; RV64IZBKB-NEXT:    lb a2, 1(a0)
 ; RV64IZBKB-NEXT:    lbu a3, 2(a0)
 ; RV64IZBKB-NEXT:    lbu a0, 3(a0)
 ; RV64IZBKB-NEXT:    packh a1, a1, a2
@@ -168,7 +168,7 @@ define i64 @load_i64(ptr %p) {
 ; RV64I-NEXT:    lbu a3, 5(a0)
 ; RV64I-NEXT:    lbu a4, 4(a0)
 ; RV64I-NEXT:    lbu a5, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a3, a3, 8
 ; RV64I-NEXT:    or a3, a3, a4
 ; RV64I-NEXT:    slli a5, a5, 16
@@ -200,14 +200,14 @@ define i64 @load_i64(ptr %p) {
 ;
 ; RV64IZBKB-LABEL: load_i64:
 ; RV64IZBKB:       # %bb.0:
-; RV64IZBKB-NEXT:    lbu a1, 4(a0)
-; RV64IZBKB-NEXT:    lbu a2, 5(a0)
-; RV64IZBKB-NEXT:    lbu a3, 6(a0)
-; RV64IZBKB-NEXT:    lbu a4, 7(a0)
-; RV64IZBKB-NEXT:    lbu a5, 0(a0)
-; RV64IZBKB-NEXT:    lbu a6, 1(a0)
-; RV64IZBKB-NEXT:    lbu a7, 2(a0)
-; RV64IZBKB-NEXT:    lbu a0, 3(a0)
+; RV64IZBKB-NEXT:    lb a1, 4(a0)
+; RV64IZBKB-NEXT:    lb a2, 5(a0)
+; RV64IZBKB-NEXT:    lb a3, 6(a0)
+; RV64IZBKB-NEXT:    lb a4, 7(a0)
+; RV64IZBKB-NEXT:    lb a5, 0(a0)
+; RV64IZBKB-NEXT:    lb a6, 1(a0)
+; RV64IZBKB-NEXT:    lb a7, 2(a0)
+; RV64IZBKB-NEXT:    lb a0, 3(a0)
 ; RV64IZBKB-NEXT:    packh a1, a1, a2
 ; RV64IZBKB-NEXT:    packh a2, a3, a4
 ; RV64IZBKB-NEXT:    packh a3, a5, a6
diff --git a/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
index d33c6662ceb5c..13df53088c97d 100644
--- a/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
@@ -525,7 +525,7 @@ define void @test_urem_vec(ptr %X) nounwind {
 ;
 ; RV64M-LABEL: test_urem_vec:
 ; RV64M:       # %bb.0:
-; RV64M-NEXT:    lbu a1, 4(a0)
+; RV64M-NEXT:    lb a1, 4(a0)
 ; RV64M-NEXT:    lwu a2, 0(a0)
 ; RV64M-NEXT:    li a3, 683
 ; RV64M-NEXT:    li a4, 1463
diff --git a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
index 3ef9f3f945108..0c54809a81659 100644
--- a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
@@ -585,9 +585,9 @@ define <4 x i16> @dont_fold_urem_power_of_two(<4 x i16> %x) nounwind {
 ; RV64I-NEXT:    sd s1, 24(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    sd s2, 16(sp) # 8-byte Folded Spill
 ; RV64I-NEXT:    sd s3, 8(sp) # 8-byte Folded Spill
-; RV64I-NEXT:    lhu s1, 0(a1)
-; RV64I-NEXT:    lhu s2, 8(a1)
-; RV64I-NEXT:    lhu s3, 16(a1)
+; RV64I-NEXT:    lh s1, 0(a1)
+; RV64I-NEXT:    lh s2, 8(a1)
+; RV64I-NEXT:    lh s3, 16(a1)
 ; RV64I-NEXT:    lhu a2, 24(a1)
 ; RV64I-NEXT:    mv s0, a0
 ; RV64I-NEXT:    li a1, 95
@@ -610,9 +610,9 @@ define <4 x i16> @dont_fold_urem_power_of_two(<4 x i16> %x) nounwind {
 ;
 ; RV64IM-LABEL: dont_fold_urem_power_of_two:
 ; RV64IM:       # %bb.0:
-; RV64IM-NEXT:    lhu a2, 0(a1)
-; RV64IM-NEXT:    lhu a3, 8(a1)
-; RV64IM-NEXT:    lhu a4, 16(a1)
+; RV64IM-NEXT:    lh a2, 0(a1)
+; RV64IM-NEXT:    lh a3, 8(a1)
+; RV64IM-NEXT:    lh a4, 16(a1)
 ; RV64IM-NEXT:    lhu a1, 24(a1)
 ; RV64IM-NEXT:    lui a5, %hi(.LCPI3_0)
 ; RV64IM-NEXT:    li a6, 95
diff --git a/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll b/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
index 09b2eeb19a69c..d475787d584ee 100644
--- a/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
+++ b/llvm/test/CodeGen/RISCV/wide-scalar-shift-by-byte-multiple-legalization.ll
@@ -205,7 +205,7 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -213,10 +213,10 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu a7, 5(a1)
-; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t2, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb a7, 5(a1)
+; RV64I-NEXT:    lb t0, 6(a1)
+; RV64I-NEXT:    lb t2, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -225,10 +225,10 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 1(a1)
-; RV64I-NEXT:    lbu t1, 0(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb t0, 1(a1)
+; RV64I-NEXT:    lb t1, 0(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -341,7 +341,7 @@ define void @shl_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -349,10 +349,10 @@ define void @shl_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu a7, 5(a1)
-; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t2, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb a7, 5(a1)
+; RV64I-NEXT:    lb t0, 6(a1)
+; RV64I-NEXT:    lb t2, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -361,10 +361,10 @@ define void @shl_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 1(a1)
-; RV64I-NEXT:    lbu t1, 0(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb t0, 1(a1)
+; RV64I-NEXT:    lb t1, 0(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -477,7 +477,7 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -485,10 +485,10 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 4(a1)
-; RV64I-NEXT:    lbu a7, 5(a1)
-; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t2, 7(a1)
+; RV64I-NEXT:    lb a6, 4(a1)
+; RV64I-NEXT:    lb a7, 5(a1)
+; RV64I-NEXT:    lb t0, 6(a1)
+; RV64I-NEXT:    lb t2, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -497,10 +497,10 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 1(a1)
-; RV64I-NEXT:    lbu t1, 0(a1)
-; RV64I-NEXT:    lbu t2, 2(a1)
-; RV64I-NEXT:    lbu a1, 3(a1)
+; RV64I-NEXT:    lb t0, 1(a1)
+; RV64I-NEXT:    lb t1, 0(a1)
+; RV64I-NEXT:    lb t2, 2(a1)
+; RV64I-NEXT:    lb a1, 3(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -614,7 +614,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -625,7 +625,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -671,7 +671,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 4(a0)
 ; RV64I-NEXT:    lbu t2, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -853,7 +853,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -864,7 +864,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -910,7 +910,7 @@ define void @lshr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 4(a0)
 ; RV64I-NEXT:    lbu t2, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -1068,7 +1068,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -1079,7 +1079,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -1125,7 +1125,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 12(a0)
 ; RV64I-NEXT:    lbu t2, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -1307,7 +1307,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -1318,7 +1318,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -1364,7 +1364,7 @@ define void @shl_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 12(a0)
 ; RV64I-NEXT:    lbu t2, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -1523,7 +1523,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -1534,7 +1534,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -1582,7 +1582,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 5(a0)
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    or a7, a7, t0
 ; RV64I-NEXT:    slli t1, t1, 16
@@ -1763,7 +1763,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -1774,7 +1774,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a6, 4(a1)
 ; RV64I-NEXT:    lbu a7, 5(a1)
 ; RV64I-NEXT:    lbu t0, 6(a1)
-; RV64I-NEXT:    lbu t3, 7(a1)
+; RV64I-NEXT:    lb t3, 7(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli t2, t2, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -1822,7 +1822,7 @@ define void @ashr_16bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 5(a0)
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    or a7, a7, t0
 ; RV64I-NEXT:    slli t1, t1, 16
@@ -1992,7 +1992,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2000,7 +2000,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -2018,7 +2018,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -2043,7 +2043,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -2054,10 +2054,10 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -2066,10 +2066,10 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    sd zero, 32(sp)
@@ -2470,7 +2470,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2478,7 +2478,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -2496,7 +2496,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -2521,7 +2521,7 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -2532,10 +2532,10 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -2544,10 +2544,10 @@ define void @lshr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    sd zero, 32(sp)
@@ -2907,7 +2907,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu s8, 5(a0)
 ; RV64I-NEXT:    lbu s9, 6(a0)
-; RV64I-NEXT:    lbu s10, 7(a0)
+; RV64I-NEXT:    lb s10, 7(a0)
 ; RV64I-NEXT:    lbu s2, 8(a0)
 ; RV64I-NEXT:    lbu s4, 9(a0)
 ; RV64I-NEXT:    lbu s5, 10(a0)
@@ -2915,7 +2915,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s7, 12(a0)
 ; RV64I-NEXT:    lbu s11, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t3, 15(a0)
+; RV64I-NEXT:    lb t3, 15(a0)
 ; RV64I-NEXT:    lbu a3, 16(a0)
 ; RV64I-NEXT:    lbu a6, 17(a0)
 ; RV64I-NEXT:    lbu t4, 18(a0)
@@ -2923,7 +2923,7 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a4, 20(a0)
 ; RV64I-NEXT:    lbu t6, 21(a0)
 ; RV64I-NEXT:    lbu s0, 22(a0)
-; RV64I-NEXT:    lbu s1, 23(a0)
+; RV64I-NEXT:    lb s1, 23(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    slli t2, t2, 16
 ; RV64I-NEXT:    slli s3, s3, 24
@@ -2948,8 +2948,8 @@ define void @lshr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s6, 28(a0)
 ; RV64I-NEXT:    lbu s7, 29(a0)
 ; RV64I-NEXT:    lbu s11, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a0, 31(a0)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    sd zero, 32(sp)
 ; RV64I-NEXT:    sd zero, 40(sp)
 ; RV64I-NEXT:    sd zero, 48(sp)
@@ -3307,7 +3307,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -3315,7 +3315,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -3333,7 +3333,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -3358,7 +3358,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -3369,10 +3369,10 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -3381,10 +3381,10 @@ define void @shl_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    sd zero, 0(sp)
@@ -3785,7 +3785,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -3793,7 +3793,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -3811,7 +3811,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -3836,7 +3836,7 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -3847,10 +3847,10 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -3859,10 +3859,10 @@ define void @shl_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) nounw
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    sd zero, 0(sp)
@@ -4222,7 +4222,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu s8, 5(a0)
 ; RV64I-NEXT:    lbu s9, 6(a0)
-; RV64I-NEXT:    lbu s10, 7(a0)
+; RV64I-NEXT:    lb s10, 7(a0)
 ; RV64I-NEXT:    lbu s2, 8(a0)
 ; RV64I-NEXT:    lbu s4, 9(a0)
 ; RV64I-NEXT:    lbu s5, 10(a0)
@@ -4230,7 +4230,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu s7, 12(a0)
 ; RV64I-NEXT:    lbu s11, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t3, 15(a0)
+; RV64I-NEXT:    lb t3, 15(a0)
 ; RV64I-NEXT:    lbu a3, 16(a0)
 ; RV64I-NEXT:    lbu a6, 17(a0)
 ; RV64I-NEXT:    lbu t4, 18(a0)
@@ -4238,7 +4238,7 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu a4, 20(a0)
 ; RV64I-NEXT:    lbu t6, 21(a0)
 ; RV64I-NEXT:    lbu s0, 22(a0)
-; RV64I-NEXT:    lbu s1, 23(a0)
+; RV64I-NEXT:    lb s1, 23(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    slli t2, t2, 16
 ; RV64I-NEXT:    slli s3, s3, 24
@@ -4263,8 +4263,8 @@ define void @shl_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) nou
 ; RV64I-NEXT:    lbu s6, 28(a0)
 ; RV64I-NEXT:    lbu s7, 29(a0)
 ; RV64I-NEXT:    lbu s11, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a0, 31(a0)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    sd zero, 0(sp)
 ; RV64I-NEXT:    sd zero, 8(sp)
 ; RV64I-NEXT:    sd zero, 16(sp)
@@ -4622,7 +4622,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -4630,7 +4630,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -4648,7 +4648,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -4673,7 +4673,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -4684,10 +4684,10 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -4696,10 +4696,10 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %byteOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    slli s7, s7, 16
@@ -5102,7 +5102,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -5110,7 +5110,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -5128,7 +5128,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -5153,7 +5153,7 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -5164,10 +5164,10 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -5176,10 +5176,10 @@ define void @ashr_32bytes_wordOff(ptr %src.ptr, ptr %wordOff.ptr, ptr %dst) noun
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    slli s7, s7, 16
@@ -5541,7 +5541,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu s8, 5(a0)
 ; RV64I-NEXT:    lbu s9, 6(a0)
-; RV64I-NEXT:    lbu s10, 7(a0)
+; RV64I-NEXT:    lb s10, 7(a0)
 ; RV64I-NEXT:    lbu s2, 8(a0)
 ; RV64I-NEXT:    lbu s4, 9(a0)
 ; RV64I-NEXT:    lbu s5, 10(a0)
@@ -5549,7 +5549,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s7, 12(a0)
 ; RV64I-NEXT:    lbu s11, 13(a0)
 ; RV64I-NEXT:    lbu t4, 14(a0)
-; RV64I-NEXT:    lbu t5, 15(a0)
+; RV64I-NEXT:    lb t5, 15(a0)
 ; RV64I-NEXT:    lbu a3, 16(a0)
 ; RV64I-NEXT:    lbu a6, 17(a0)
 ; RV64I-NEXT:    lbu t2, 18(a0)
@@ -5557,7 +5557,7 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu a4, 20(a0)
 ; RV64I-NEXT:    lbu t6, 21(a0)
 ; RV64I-NEXT:    lbu s0, 22(a0)
-; RV64I-NEXT:    lbu s1, 23(a0)
+; RV64I-NEXT:    lb s1, 23(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli s3, s3, 24
@@ -5582,8 +5582,8 @@ define void @ashr_32bytes_dwordOff(ptr %src.ptr, ptr %dwordOff.ptr, ptr %dst) no
 ; RV64I-NEXT:    lbu s6, 28(a0)
 ; RV64I-NEXT:    lbu s7, 29(a0)
 ; RV64I-NEXT:    lbu s11, 30(a0)
-; RV64I-NEXT:    lbu a0, 31(a0)
-; RV64I-NEXT:    lbu a1, 0(a1)
+; RV64I-NEXT:    lb a0, 31(a0)
+; RV64I-NEXT:    lb a1, 0(a1)
 ; RV64I-NEXT:    slli t4, t4, 16
 ; RV64I-NEXT:    slli t5, t5, 24
 ; RV64I-NEXT:    or t4, t5, t4
diff --git a/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll b/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
index cd7f30d8f5898..da5e5f1df3616 100644
--- a/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
+++ b/llvm/test/CodeGen/RISCV/wide-scalar-shift-legalization.ll
@@ -196,7 +196,7 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -204,10 +204,10 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 0(a1)
-; RV64I-NEXT:    lbu a7, 1(a1)
-; RV64I-NEXT:    lbu t0, 2(a1)
-; RV64I-NEXT:    lbu t2, 3(a1)
+; RV64I-NEXT:    lb a6, 0(a1)
+; RV64I-NEXT:    lb a7, 1(a1)
+; RV64I-NEXT:    lb t0, 2(a1)
+; RV64I-NEXT:    lb t2, 3(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -216,10 +216,10 @@ define void @lshr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 4(a1)
-; RV64I-NEXT:    lbu t2, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 4(a1)
+; RV64I-NEXT:    lb t2, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -329,7 +329,7 @@ define void @shl_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -337,10 +337,10 @@ define void @shl_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 0(a1)
-; RV64I-NEXT:    lbu a7, 1(a1)
-; RV64I-NEXT:    lbu t0, 2(a1)
-; RV64I-NEXT:    lbu t2, 3(a1)
+; RV64I-NEXT:    lb a6, 0(a1)
+; RV64I-NEXT:    lb a7, 1(a1)
+; RV64I-NEXT:    lb t0, 2(a1)
+; RV64I-NEXT:    lb t2, 3(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -349,10 +349,10 @@ define void @shl_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 4(a1)
-; RV64I-NEXT:    lbu t2, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 4(a1)
+; RV64I-NEXT:    lb t2, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -462,7 +462,7 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -470,10 +470,10 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a3, a4, a3
 ; RV64I-NEXT:    or a4, a6, a5
 ; RV64I-NEXT:    or a5, t0, a7
-; RV64I-NEXT:    lbu a6, 0(a1)
-; RV64I-NEXT:    lbu a7, 1(a1)
-; RV64I-NEXT:    lbu t0, 2(a1)
-; RV64I-NEXT:    lbu t2, 3(a1)
+; RV64I-NEXT:    lb a6, 0(a1)
+; RV64I-NEXT:    lb a7, 1(a1)
+; RV64I-NEXT:    lb t0, 2(a1)
+; RV64I-NEXT:    lb t2, 3(a1)
 ; RV64I-NEXT:    slli t1, t1, 16
 ; RV64I-NEXT:    slli a0, a0, 24
 ; RV64I-NEXT:    slli a7, a7, 8
@@ -482,10 +482,10 @@ define void @ashr_8bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or a0, a0, t1
 ; RV64I-NEXT:    or a6, a7, a6
 ; RV64I-NEXT:    or a7, t2, t0
-; RV64I-NEXT:    lbu t0, 5(a1)
-; RV64I-NEXT:    lbu t1, 4(a1)
-; RV64I-NEXT:    lbu t2, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb t0, 5(a1)
+; RV64I-NEXT:    lb t1, 4(a1)
+; RV64I-NEXT:    lb t2, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -596,7 +596,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -619,7 +619,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 5(a1)
 ; RV64I-NEXT:    lbu t2, 4(a1)
 ; RV64I-NEXT:    lbu t3, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t2
 ; RV64I-NEXT:    slli t3, t3, 16
@@ -652,7 +652,7 @@ define void @lshr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 4(a0)
 ; RV64I-NEXT:    lbu t2, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -832,7 +832,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -855,7 +855,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 5(a1)
 ; RV64I-NEXT:    lbu t2, 4(a1)
 ; RV64I-NEXT:    lbu t3, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t2
 ; RV64I-NEXT:    slli t3, t3, 16
@@ -888,7 +888,7 @@ define void @shl_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 12(a0)
 ; RV64I-NEXT:    lbu t2, 14(a0)
-; RV64I-NEXT:    lbu a0, 15(a0)
+; RV64I-NEXT:    lb a0, 15(a0)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t1
 ; RV64I-NEXT:    slli t2, t2, 16
@@ -1068,7 +1068,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 12(a0)
 ; RV64I-NEXT:    lbu t0, 13(a0)
 ; RV64I-NEXT:    lbu t1, 14(a0)
-; RV64I-NEXT:    lbu t2, 15(a0)
+; RV64I-NEXT:    lb t2, 15(a0)
 ; RV64I-NEXT:    slli a4, a4, 8
 ; RV64I-NEXT:    slli a5, a5, 16
 ; RV64I-NEXT:    slli a6, a6, 24
@@ -1091,7 +1091,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu t0, 5(a1)
 ; RV64I-NEXT:    lbu t2, 4(a1)
 ; RV64I-NEXT:    lbu t3, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli t0, t0, 8
 ; RV64I-NEXT:    or t0, t0, t2
 ; RV64I-NEXT:    slli t3, t3, 16
@@ -1126,7 +1126,7 @@ define void @ashr_16bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 5(a0)
 ; RV64I-NEXT:    lbu t0, 4(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu a0, 7(a0)
+; RV64I-NEXT:    lb a0, 7(a0)
 ; RV64I-NEXT:    slli a7, a7, 8
 ; RV64I-NEXT:    or a7, a7, t0
 ; RV64I-NEXT:    slli t1, t1, 16
@@ -1319,7 +1319,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -1327,7 +1327,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -1345,7 +1345,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -1370,7 +1370,7 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -1381,10 +1381,10 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -1393,10 +1393,10 @@ define void @lshr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    slli s7, s7, 16
@@ -1795,7 +1795,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -1803,7 +1803,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -1821,7 +1821,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -1846,7 +1846,7 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -1857,10 +1857,10 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -1869,10 +1869,10 @@ define void @shl_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    slli s7, s7, 16
@@ -2271,7 +2271,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu a7, 4(a0)
 ; RV64I-NEXT:    lbu t0, 5(a0)
 ; RV64I-NEXT:    lbu t1, 6(a0)
-; RV64I-NEXT:    lbu t2, 7(a0)
+; RV64I-NEXT:    lb t2, 7(a0)
 ; RV64I-NEXT:    lbu t3, 8(a0)
 ; RV64I-NEXT:    lbu t4, 9(a0)
 ; RV64I-NEXT:    lbu t5, 10(a0)
@@ -2279,7 +2279,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s0, 12(a0)
 ; RV64I-NEXT:    lbu s1, 13(a0)
 ; RV64I-NEXT:    lbu s2, 14(a0)
-; RV64I-NEXT:    lbu s3, 15(a0)
+; RV64I-NEXT:    lb s3, 15(a0)
 ; RV64I-NEXT:    lbu s4, 16(a0)
 ; RV64I-NEXT:    lbu s5, 17(a0)
 ; RV64I-NEXT:    lbu s6, 18(a0)
@@ -2297,7 +2297,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s8, 20(a0)
 ; RV64I-NEXT:    lbu s9, 21(a0)
 ; RV64I-NEXT:    lbu s10, 22(a0)
-; RV64I-NEXT:    lbu s11, 23(a0)
+; RV64I-NEXT:    lb s11, 23(a0)
 ; RV64I-NEXT:    slli t4, t4, 8
 ; RV64I-NEXT:    slli t5, t5, 16
 ; RV64I-NEXT:    slli t6, t6, 24
@@ -2322,7 +2322,7 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    lbu s3, 28(a0)
 ; RV64I-NEXT:    lbu s4, 29(a0)
 ; RV64I-NEXT:    lbu s5, 30(a0)
-; RV64I-NEXT:    lbu s6, 31(a0)
+; RV64I-NEXT:    lb s6, 31(a0)
 ; RV64I-NEXT:    slli s10, s10, 16
 ; RV64I-NEXT:    slli s11, s11, 24
 ; RV64I-NEXT:    slli s0, s0, 8
@@ -2333,10 +2333,10 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or t6, s0, t6
 ; RV64I-NEXT:    or s0, s2, s1
 ; RV64I-NEXT:    or s1, s4, s3
-; RV64I-NEXT:    lbu s2, 0(a1)
-; RV64I-NEXT:    lbu s3, 1(a1)
-; RV64I-NEXT:    lbu s4, 2(a1)
-; RV64I-NEXT:    lbu s7, 3(a1)
+; RV64I-NEXT:    lb s2, 0(a1)
+; RV64I-NEXT:    lb s3, 1(a1)
+; RV64I-NEXT:    lb s4, 2(a1)
+; RV64I-NEXT:    lb s7, 3(a1)
 ; RV64I-NEXT:    slli s5, s5, 16
 ; RV64I-NEXT:    slli s6, s6, 24
 ; RV64I-NEXT:    slli s3, s3, 8
@@ -2345,10 +2345,10 @@ define void @ashr_32bytes(ptr %src.ptr, ptr %bitOff.ptr, ptr %dst) nounwind {
 ; RV64I-NEXT:    or s5, s6, s5
 ; RV64I-NEXT:    or s2, s3, s2
 ; RV64I-NEXT:    or s3, s7, s4
-; RV64I-NEXT:    lbu s4, 5(a1)
-; RV64I-NEXT:    lbu s6, 4(a1)
-; RV64I-NEXT:    lbu s7, 6(a1)
-; RV64I-NEXT:    lbu a1, 7(a1)
+; RV64I-NEXT:    lb s4, 5(a1)
+; RV64I-NEXT:    lb s6, 4(a1)
+; RV64I-NEXT:    lb s7, 6(a1)
+; RV64I-NEXT:    lb a1, 7(a1)
 ; RV64I-NEXT:    slli s4, s4, 8
 ; RV64I-NEXT:    or s4, s4, s6
 ; RV64I-NEXT:    slli s7, s7, 16
diff --git a/llvm/test/CodeGen/RISCV/zdinx-boundary-check.ll b/llvm/test/CodeGen/RISCV/zdinx-boundary-check.ll
index f9db686c9e855..1ef37f73b3b08 100644
--- a/llvm/test/CodeGen/RISCV/zdinx-boundary-check.ll
+++ b/llvm/test/CodeGen/RISCV/zdinx-boundary-check.ll
@@ -242,7 +242,7 @@ define void @foo7(ptr nocapture %p) nounwind {
 ; RV64ZDINX:       # %bb.0: # %entry
 ; RV64ZDINX-NEXT:    lui a1, %hi(d)
 ; RV64ZDINX-NEXT:    addi a2, a1, %lo(d)
-; RV64ZDINX-NEXT:    lwu a2, 8(a2)
+; RV64ZDINX-NEXT:    lw a2, 8(a2)
 ; RV64ZDINX-NEXT:    lwu a1, %lo(d+4)(a1)
 ; RV64ZDINX-NEXT:    slli a2, a2, 32
 ; RV64ZDINX-NEXT:    or a1, a2, a1
@@ -337,7 +337,7 @@ define void @foo9(ptr nocapture %p) nounwind {
 ; RV64ZDINX:       # %bb.0: # %entry
 ; RV64ZDINX-NEXT:    lui a1, %hi(e)
 ; RV64ZDINX-NEXT:    addi a2, a1, %lo(e)
-; RV64ZDINX-NEXT:    lwu a2, 4(a2)
+; RV64ZDINX-NEXT:    lw a2, 4(a2)
 ; RV64ZDINX-NEXT:    lwu a1, %lo(e)(a1)
 ; RV64ZDINX-NEXT:    slli a2, a2, 32
 ; RV64ZDINX-NEXT:    or a1, a2, a1
@@ -480,7 +480,7 @@ define double @foo13(ptr nocapture %p) nounwind {
 ; RV64ZDINX-LABEL: foo13:
 ; RV64ZDINX:       # %bb.0: # %entry
 ; RV64ZDINX-NEXT:    lui a0, %hi(f)
-; RV64ZDINX-NEXT:    lwu a1, %lo(f+8)(a0)
+; RV64ZDINX-NEXT:    lw a1, %lo(f+8)(a0)
 ; RV64ZDINX-NEXT:    lwu a0, %lo(f+4)(a0)
 ; RV64ZDINX-NEXT:    slli a1, a1, 32
 ; RV64ZDINX-NEXT:    or a0, a1, a0


