[llvm] [RISCV] Use addi rather than addiw for immediates materialised by lui+addi(w) pairs when possible (PR #141663)
Alex Bradbury via llvm-commits
llvm-commits at lists.llvm.org
Tue May 27 13:00:53 PDT 2025
https://github.com/asb created https://github.com/llvm/llvm-project/pull/141663
The logic in RISCVMatInt would previously produce lui+addiw on RV64 whenever a 32-bit integer had to be materialised and both the Hi20 and Lo12 parts were non-zero. However, addi can sometimes be used equivalently, whenever the sign extension behaviour of addiw would be a no-op (see the sketch after the list below). This patch moves to using addiw only when necessary. Although there is absolutely no difference in terms of compressibility or performance, preferring addi has the following advantages:
* It's more consistent with the logic used elsewhere in the backend. For instance, RISCVOptWInstrs will try to convert addiw to addi on the basis that it reduces test diffs vs RV32.
* This matches the lowering GCC does in its codegen path. Unlike LLVM, GCC seems to have different expansion logic for the assembler vs codegen: for codegen it will use lui+addi if possible, but expanding `li` in the assembler always produces lui+addiw, as LLVM did prior to this commit. As someone who has been looking at a lot of gcc vs clang diffs lately, I think reducing unnecessary divergence is of at least some value.
* As the diff for fold-mem-offset.ll shows, it appears we can fold memory offsets in more cases when addi is used. Memory offset folding could be taught to recognise when the addiw could be replaced with an addi, but that seems unnecessary when we can simply change the logic in RISCVMatInt.
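To make the "sign extension would be a no-op" condition concrete, here is a minimal standalone sketch of the check (illustrative code, not from the patch; `addiSuffices` and `signExtend32` are made-up names, the latter standing in for LLVM's `SignExtend64<32>`):
```cpp
#include <cstdint>
#include <cstdio>

// Sign-extend the low 32 bits of Val to 64 bits (stand-in for LLVM's
// SignExtend64<32>).
static int64_t signExtend32(int64_t Val) { return (int32_t)(uint32_t)Val; }

// True if lui Hi20 + addi Lo12 already produces the desired sign-extended
// 32-bit value, i.e. the extra sign extension that addiw would perform is
// a no-op.
static bool addiSuffices(int64_t Hi20, int64_t Lo12) {
  int64_t LuiRes = signExtend32(Hi20 << 12); // register contents after lui
  return LuiRes + Lo12 == signExtend32(LuiRes + Lo12);
}

int main() {
  // 0x12345678 = lui 0x12345 + 0x678: the sum stays in the signed 32-bit
  // range, so addi is equivalent to addiw.
  printf("%d\n", addiSuffices(0x12345, 0x678)); // 1 -> addi is fine
  // 0x7fffffff = lui 0x80000 + (-1): lui leaves 0xffffffff80000000 in the
  // register, and a plain 64-bit addi of -1 gives 0xffffffff7fffffff; only
  // addiw's 32-bit wrap-and-extend recovers 0x000000007fffffff.
  printf("%d\n", addiSuffices(0x80000, -1)); // 0 -> addiw required
}
```
The RISCVMatInt.cpp hunk below implements exactly this condition.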
---
Just to underline again, there is no inherent advantage to addi vs addiw. The main itch I'd wanted to scratch was the second bullet point above (more closely matching gcc, after seeing the addi vs addiw difference in many comparisons). Trying this on the test suite, the changes are mostly minimal, but there is the occasional case where memory offset folding kicks in where it didn't before.
This is far from the norm, but one example from a function in deepsjeng is below. Note: I'm sharing this as a point of interest, as I hadn't expected any positive codegen changes. I haven't characterised how often this happens or the impact on runtime instruction count, since I don't think this patch is predicated on being a perf improvement: to my mind it's sufficiently motivated by the first two bullet points in the patch description above.
```llvm
; ModuleID = '<stdin>'
source_filename = "<stdin>"
target datalayout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128"
target triple = "riscv64-unknown-linux-gnu"
%struct.ham = type { i32, i32, [8 x %struct.zot], [8 x i32], [8 x %struct.quux] }
%struct.zot = type { i32, i32, i32 }
%struct.quux = type { i32, [64 x i32], i64, i64, i64, [13 x i64], i32, i32, [13 x i32], i32, i32, i32, i32, i32, i32, i32, i64, i64, [64 x %struct.wombat], [64 x i32], [64 x i32], [64 x %struct.wombat.0], i64, i64, i32, [64 x i32], i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, [1000 x i64] }
%struct.wombat = type { i32, i32, i32, i32, i64, i64 }
%struct.wombat.0 = type { i32, i32, i32, i32 }
@global = external global %struct.ham
define void @blam() {
bb:
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 16616), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 29016), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 41416), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 53816), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 66216), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 78616), i8 0, i64 16, i1 false)
tail call void @llvm.memset.p0.i64(ptr getelementptr inbounds nuw (i8, ptr @global, i64 91016), i8 0, i64 16, i1 false)
ret void
}
; Function Attrs: nocallback nofree nounwind willreturn memory(argmem: write)
declare void @llvm.memset.p0.i64(ptr writeonly captures(none), i8, i64, i1 immarg) #0
attributes #0 = { nocallback nofree nounwind willreturn memory(argmem: write) }
```
Before:
```
lui a0, %hi(global)
addi a0, a0, %lo(global)
lui a1, 4
lui a2, 7
lui a3, 10
lui a4, 13
lui a5, 16
lui a6, 19
lui a7, 22
addiw a1, a1, 232
addiw a2, a2, 344
addiw a3, a3, 456
addiw a4, a4, 568
addiw a5, a5, 680
addiw a6, a6, 792
addiw a7, a7, 904
add a1, a0, a1
add a2, a0, a2
add a3, a0, a3
add a4, a0, a4
add a5, a0, a5
add a6, a0, a6
add a0, a0, a7
sd zero, 0(a1)
sd zero, 8(a1)
sd zero, 0(a2)
sd zero, 8(a2)
sd zero, 0(a3)
sd zero, 8(a3)
sd zero, 0(a4)
sd zero, 8(a4)
sd zero, 0(a5)
sd zero, 8(a5)
sd zero, 0(a6)
sd zero, 8(a6)
sd zero, 0(a0)
sd zero, 8(a0)
ret
```
After:
```
lui a0, %hi(global)
addi a0, a0, %lo(global)
lui a1, 4
lui a2, 7
lui a3, 10
lui a4, 13
lui a5, 16
lui a6, 19
lui a7, 22
add a1, a0, a1
add a2, a0, a2
add a3, a0, a3
add a4, a0, a4
add a5, a0, a5
add a6, a0, a6
add a0, a0, a7
sd zero, 232(a1)
sd zero, 240(a1)
sd zero, 344(a2)
sd zero, 352(a2)
sd zero, 456(a3)
sd zero, 464(a3)
sd zero, 568(a4)
sd zero, 576(a4)
sd zero, 680(a5)
sd zero, 688(a5)
sd zero, 792(a6)
sd zero, 800(a6)
sd zero, 904(a0)
sd zero, 912(a0)
ret
```
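Each of those offsets splits into the standard Hi20/Lo12 pair; as noted in the description, the offset folding currently handles plain addi but not addiw, so with this patch the Lo12 part can migrate into the sd displacement. A standalone sketch of the split (illustrative code, not from the patch):
```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Offsets from the memset calls in the example above.
  const int64_t Offsets[] = {16616, 29016, 41416, 53816, 66216, 78616, 91016};
  for (int64_t Off : Offsets) {
    // Standard RISC-V Hi20/Lo12 split: Lo12 is the sign-extended low 12
    // bits; the +0x800 rounding makes Hi20 absorb the borrow when Lo12 is
    // negative.
    int64_t Lo12 = (Off & 0xfff) - ((Off & 0x800) << 1);
    int64_t Hi20 = ((Off + 0x800) >> 12) & 0xfffff;
    printf("lui %lld + addi %lld\n", (long long)Hi20, (long long)Lo12);
  }
}
```
In the "After" sequence the addi instructions vanish entirely: each Lo12 value (232, 344, 456, ...) reappears as the store displacement, saving one instruction per constant.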
From 3089da2dd481e7bea9a4bf86c5b8a5634232f682 Mon Sep 17 00:00:00 2001
From: Alex Bradbury <asb at igalia.com>
Date: Tue, 27 May 2025 20:17:16 +0100
Subject: [PATCH] [RISCV] Use addi rather than addiw for immediates
 materialised by lui+addi(w) pairs when possible
The logic in RISCVMatInt would previously produce lui+addiw on RV64
whenever a 32-bit integer had to be materialised and both the Hi20 and
Lo12 parts were non-zero. However, addi can sometimes be used
equivalently, whenever the sign extension behaviour of addiw would be a
no-op. This patch moves to using addiw only when necessary. Although
there is absolutely no difference in terms of compressibility or
performance, preferring addi has the following advantages:
* It's more consistent with the logic used elsewhere in the backend. For
instance, RISCVOptWInstrs will try to convert addiw to addi on the
basis that it reduces test diffs vs RV32.
* This matches the lowering GCC does in its codegen path. Unlike LLVM,
GCC seems to have different expansion logic for the assembler vs
codegen: for codegen it will use lui+addi if possible, but expanding
`li` in the assembler always produces lui+addiw, as LLVM did prior to
this commit. As someone who has been looking at a lot of gcc vs clang
diffs lately, I think reducing unnecessary divergence is of at least
some value.
* As the diff for fold-mem-offset.ll shows, it appears we can fold
memory offsets in more cases when addi is used. Memory offset folding
could be taught to recognise when the addiw could be replaced with an
addi, but that seems unnecessary when we can simply change the logic
in RISCVMatInt.
---
.../Target/RISCV/MCTargetDesc/RISCVMatInt.cpp | 10 +-
.../CodeGen/RISCV/GlobalISel/alu-roundtrip.ll | 2 +-
.../test/CodeGen/RISCV/GlobalISel/bitmanip.ll | 8 +-
.../RISCV/GlobalISel/div-by-constant.ll | 14 +-
.../RISCV/GlobalISel/float-intrinsics.ll | 2 +-
.../instruction-select/constant64.mir | 4 +-
.../instruction-select/fp-constant-f16.mir | 23 +-
.../instruction-select/fp-constant.mir | 35 +-
llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll | 46 +--
.../test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll | 8 +-
llvm/test/CodeGen/RISCV/GlobalISel/vararg.ll | 16 +-
llvm/test/CodeGen/RISCV/abdu-neg.ll | 102 ++---
llvm/test/CodeGen/RISCV/abdu.ll | 10 +-
llvm/test/CodeGen/RISCV/addimm-mulimm.ll | 10 +-
llvm/test/CodeGen/RISCV/alu16.ll | 2 +-
llvm/test/CodeGen/RISCV/atomic-rmw.ll | 30 +-
llvm/test/CodeGen/RISCV/atomic-signext.ll | 4 +-
.../CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll | 8 +-
.../CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll | 8 +-
llvm/test/CodeGen/RISCV/avgceilu.ll | 4 +-
llvm/test/CodeGen/RISCV/avgflooru.ll | 4 +-
llvm/test/CodeGen/RISCV/bittest.ll | 56 ++-
.../CodeGen/RISCV/branch-relaxation-rv64.ll | 14 +-
llvm/test/CodeGen/RISCV/bswap-bitreverse.ll | 116 +++---
llvm/test/CodeGen/RISCV/calling-conv-half.ll | 4 +-
.../calling-conv-lp64-lp64f-lp64d-common.ll | 2 +-
llvm/test/CodeGen/RISCV/codemodel-lowering.ll | 6 +-
llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll | 100 ++---
.../CodeGen/RISCV/ctz_zero_return_test.ll | 6 +-
llvm/test/CodeGen/RISCV/div-by-constant.ll | 32 +-
llvm/test/CodeGen/RISCV/div.ll | 8 +-
llvm/test/CodeGen/RISCV/double-convert.ll | 2 +-
llvm/test/CodeGen/RISCV/float-convert.ll | 22 +-
llvm/test/CodeGen/RISCV/float-imm.ll | 22 +-
llvm/test/CodeGen/RISCV/float-intrinsics.ll | 6 +-
.../test/CodeGen/RISCV/fold-addi-loadstore.ll | 6 +-
llvm/test/CodeGen/RISCV/fold-mem-offset.ll | 5 +-
llvm/test/CodeGen/RISCV/fpclamptosat.ll | 48 +--
llvm/test/CodeGen/RISCV/fpenv.ll | 6 +-
llvm/test/CodeGen/RISCV/half-arith.ll | 44 +--
llvm/test/CodeGen/RISCV/half-convert.ll | 58 +--
llvm/test/CodeGen/RISCV/half-imm.ll | 24 +-
llvm/test/CodeGen/RISCV/half-intrinsics.ll | 18 +-
llvm/test/CodeGen/RISCV/i64-icmp.ll | 22 +-
llvm/test/CodeGen/RISCV/imm.ll | 348 +++++++++---------
.../RISCV/inline-asm-mem-constraint.ll | 18 +-
.../RISCV/lack-of-signed-truncation-check.ll | 2 +-
.../RISCV/local-stack-slot-allocation.ll | 6 +-
...op-strength-reduce-add-cheaper-than-mul.ll | 2 +-
.../CodeGen/RISCV/macro-fusion-lui-addi.ll | 14 +-
llvm/test/CodeGen/RISCV/memset-inline.ll | 16 +-
llvm/test/CodeGen/RISCV/narrow-shl-cst.ll | 4 +-
.../RISCV/out-of-reach-emergency-slot.mir | 10 +-
.../test/CodeGen/RISCV/overflow-intrinsics.ll | 2 +-
llvm/test/CodeGen/RISCV/pr135206.ll | 8 +-
llvm/test/CodeGen/RISCV/pr56457.ll | 8 +-
llvm/test/CodeGen/RISCV/pr58286.ll | 24 +-
llvm/test/CodeGen/RISCV/pr58511.ll | 8 +-
llvm/test/CodeGen/RISCV/pr68855.ll | 2 +-
llvm/test/CodeGen/RISCV/pr69586.ll | 132 +++----
llvm/test/CodeGen/RISCV/pr90730.ll | 2 +-
llvm/test/CodeGen/RISCV/pr95271.ll | 4 +-
llvm/test/CodeGen/RISCV/prefer-w-inst.ll | 2 +-
llvm/test/CodeGen/RISCV/prefetch.ll | 20 +-
llvm/test/CodeGen/RISCV/prolog-epilogue.ll | 12 +-
llvm/test/CodeGen/RISCV/rem.ll | 2 +-
llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll | 2 +-
llvm/test/CodeGen/RISCV/rv64-float-convert.ll | 8 +-
llvm/test/CodeGen/RISCV/rv64-half-convert.ll | 8 +-
llvm/test/CodeGen/RISCV/rv64-patchpoint.ll | 4 +-
llvm/test/CodeGen/RISCV/rv64xtheadba.ll | 8 +-
llvm/test/CodeGen/RISCV/rv64xtheadbb.ll | 30 +-
llvm/test/CodeGen/RISCV/rv64zba.ll | 10 +-
llvm/test/CodeGen/RISCV/rv64zbb-intrinsic.ll | 2 +-
llvm/test/CodeGen/RISCV/rv64zbb.ll | 78 ++--
llvm/test/CodeGen/RISCV/rv64zbkb.ll | 2 +-
llvm/test/CodeGen/RISCV/rv64zbs.ll | 22 +-
.../CodeGen/RISCV/rvv/bitreverse-sdnode.ll | 32 +-
llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll | 80 ++--
llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll | 22 +-
llvm/test/CodeGen/RISCV/rvv/ctlz-sdnode.ll | 64 ++--
llvm/test/CodeGen/RISCV/rvv/ctpop-sdnode.ll | 32 +-
llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll | 96 ++---
llvm/test/CodeGen/RISCV/rvv/cttz-sdnode.ll | 64 ++--
llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll | 96 ++---
.../CodeGen/RISCV/rvv/extractelt-int-rv64.ll | 4 +-
.../RISCV/rvv/fixed-vectors-bitreverse-vp.ll | 80 ++--
.../RISCV/rvv/fixed-vectors-bitreverse.ll | 16 +-
.../RISCV/rvv/fixed-vectors-bswap-vp.ll | 20 +-
.../CodeGen/RISCV/rvv/fixed-vectors-bswap.ll | 4 +-
.../RISCV/rvv/fixed-vectors-ctlz-vp.ll | 192 +++++-----
.../CodeGen/RISCV/rvv/fixed-vectors-ctlz.ll | 32 +-
.../RISCV/rvv/fixed-vectors-ctpop-vp.ll | 96 ++---
.../CodeGen/RISCV/rvv/fixed-vectors-ctpop.ll | 16 +-
.../RISCV/rvv/fixed-vectors-cttz-vp.ll | 192 +++++-----
.../CodeGen/RISCV/rvv/fixed-vectors-cttz.ll | 32 +-
.../RISCV/rvv/fixed-vectors-extract.ll | 4 +-
.../RISCV/rvv/fixed-vectors-int-buildvec.ll | 2 +-
.../CodeGen/RISCV/rvv/fixed-vectors-int.ll | 10 +-
.../RISCV/rvv/fixed-vectors-masked-gather.ll | 2 +-
.../RISCV/rvv/fixed-vectors-trunc-sat-clip.ll | 4 +-
.../RISCV/rvv/fixed-vectors-zvqdotq.ll | 71 ++--
.../CodeGen/RISCV/rvv/fpclamptosat_vec.ll | 36 +-
llvm/test/CodeGen/RISCV/rvv/frm-insert.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/memset-inline.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/pr88799.ll | 2 +-
.../RISCV/rvv/stack-probing-dynamic.ll | 2 +-
.../CodeGen/RISCV/rvv/stack-probing-rvv.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/stepvector.ll | 4 +-
.../RISCV/rvv/trunc-sat-clip-sdnode.ll | 2 +-
.../RISCV/rvv/vsetvl-cross-inline-asm.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vsplats-i64.ll | 22 +-
llvm/test/CodeGen/RISCV/rvv/zvqdotq-sdnode.ll | 71 ++--
llvm/test/CodeGen/RISCV/sadd_sat.ll | 4 +-
llvm/test/CodeGen/RISCV/sadd_sat_plus.ll | 4 +-
llvm/test/CodeGen/RISCV/select-cc.ll | 4 +-
llvm/test/CodeGen/RISCV/select-const.ll | 4 +-
llvm/test/CodeGen/RISCV/select.ll | 124 ++-----
llvm/test/CodeGen/RISCV/sextw-removal.ll | 12 +-
llvm/test/CodeGen/RISCV/shl-cttz.ll | 8 +-
llvm/test/CodeGen/RISCV/shlimm-addimm.ll | 10 +-
.../CodeGen/RISCV/signed-truncation-check.ll | 6 +-
llvm/test/CodeGen/RISCV/split-offsets.ll | 25 +-
.../CodeGen/RISCV/split-udiv-by-constant.ll | 24 +-
.../CodeGen/RISCV/split-urem-by-constant.ll | 18 +-
llvm/test/CodeGen/RISCV/srem-lkk.ll | 12 +-
.../CodeGen/RISCV/srem-seteq-illegal-types.ll | 8 +-
llvm/test/CodeGen/RISCV/srem-vector-lkk.ll | 8 +-
llvm/test/CodeGen/RISCV/ssub_sat.ll | 4 +-
llvm/test/CodeGen/RISCV/ssub_sat_plus.ll | 4 +-
.../RISCV/stack-clash-prologue-nounwind.ll | 18 +-
.../CodeGen/RISCV/stack-clash-prologue.ll | 18 +-
.../CodeGen/RISCV/stack-inst-compress.mir | 8 +-
llvm/test/CodeGen/RISCV/stack-offset.ll | 20 +-
llvm/test/CodeGen/RISCV/stack-realignment.ll | 4 +-
llvm/test/CodeGen/RISCV/switch-width.ll | 6 +-
llvm/test/CodeGen/RISCV/trunc-nsw-nuw.ll | 2 +-
llvm/test/CodeGen/RISCV/uadd_sat.ll | 4 +-
llvm/test/CodeGen/RISCV/uadd_sat_plus.ll | 4 +-
.../CodeGen/RISCV/urem-seteq-illegal-types.ll | 6 +-
llvm/test/CodeGen/RISCV/urem-vector-lkk.ll | 6 +-
llvm/test/CodeGen/RISCV/usub_sat_plus.ll | 2 +-
llvm/test/CodeGen/RISCV/vararg.ll | 28 +-
.../RISCV/varargs-with-fp-and-second-adj.ll | 2 +-
llvm/test/CodeGen/RISCV/zbb-logic-neg-imm.ll | 21 +-
llvm/test/MC/RISCV/rv64c-aliases-valid.s | 18 +-
llvm/test/MC/RISCV/rv64i-aliases-valid.s | 58 +--
llvm/test/MC/RISCV/rv64zba-aliases-valid.s | 32 +-
llvm/test/MC/RISCV/rv64zbs-aliases-valid.s | 8 +-
150 files changed, 1795 insertions(+), 1972 deletions(-)
diff --git a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMatInt.cpp b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMatInt.cpp
index 8ea2548258fdb..c14361e988de6 100644
--- a/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMatInt.cpp
+++ b/llvm/lib/Target/RISCV/MCTargetDesc/RISCVMatInt.cpp
@@ -94,7 +94,15 @@ static void generateInstSeqImpl(int64_t Val, const MCSubtargetInfo &STI,
Res.emplace_back(RISCV::LUI, Hi20);
if (Lo12 || Hi20 == 0) {
- unsigned AddiOpc = (IsRV64 && Hi20) ? RISCV::ADDIW : RISCV::ADDI;
+ unsigned AddiOpc = RISCV::ADDI;
+ if (IsRV64 && Hi20) {
+ // Use ADDIW rather than ADDI only when necessary for correctness. As
+ // noted in RISCVOptWInstrs, this helps reduce test differences vs
+ // RV32 without being a pessimization.
+ int64_t LuiRes = SignExtend64<32>(Hi20 << 12);
+ if (LuiRes + Lo12 != SignExtend64<32>(LuiRes + Lo12))
+ AddiOpc = RISCV::ADDIW;
+ }
Res.emplace_back(AddiOpc, Lo12);
}
return;
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/alu-roundtrip.ll b/llvm/test/CodeGen/RISCV/GlobalISel/alu-roundtrip.ll
index 1632f92e96b50..487cb5768dcad 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/alu-roundtrip.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/alu-roundtrip.ll
@@ -468,7 +468,7 @@ define i64 @subi_i64(i64 %a) {
; RV64IM-LABEL: subi_i64:
; RV64IM: # %bb.0: # %entry
; RV64IM-NEXT: lui a1, 1048275
-; RV64IM-NEXT: addiw a1, a1, -1548
+; RV64IM-NEXT: addi a1, a1, -1548
; RV64IM-NEXT: add a0, a0, a1
; RV64IM-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/bitmanip.ll b/llvm/test/CodeGen/RISCV/GlobalISel/bitmanip.ll
index bce6dfacf8e82..68bc1e5db6095 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/bitmanip.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/bitmanip.ll
@@ -174,8 +174,8 @@ define i24 @bitreverse_i24(i24 %x) {
; RV64-NEXT: slli a1, a0, 16
; RV64-NEXT: lui a2, 4096
; RV64-NEXT: lui a3, 1048335
-; RV64-NEXT: addiw a2, a2, -1
-; RV64-NEXT: addiw a3, a3, 240
+; RV64-NEXT: addi a2, a2, -1
+; RV64-NEXT: addi a3, a3, 240
; RV64-NEXT: and a0, a0, a2
; RV64-NEXT: srli a0, a0, 16
; RV64-NEXT: or a0, a0, a1
@@ -184,7 +184,7 @@ define i24 @bitreverse_i24(i24 %x) {
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: and a0, a0, a3
; RV64-NEXT: lui a3, 1047757
-; RV64-NEXT: addiw a3, a3, -820
+; RV64-NEXT: addi a3, a3, -820
; RV64-NEXT: srli a1, a1, 4
; RV64-NEXT: or a0, a1, a0
; RV64-NEXT: and a1, a3, a2
@@ -192,7 +192,7 @@ define i24 @bitreverse_i24(i24 %x) {
; RV64-NEXT: slli a0, a0, 2
; RV64-NEXT: and a0, a0, a3
; RV64-NEXT: lui a3, 1047211
-; RV64-NEXT: addiw a3, a3, -1366
+; RV64-NEXT: addi a3, a3, -1366
; RV64-NEXT: and a2, a3, a2
; RV64-NEXT: srli a1, a1, 2
; RV64-NEXT: or a0, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/div-by-constant.ll b/llvm/test/CodeGen/RISCV/GlobalISel/div-by-constant.ll
index 9c46e6792e8d8..94b8afcabbd52 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/div-by-constant.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/div-by-constant.ll
@@ -25,7 +25,7 @@ define i32 @udiv_constant_no_add(i32 %a) nounwind {
; RV64IM-NEXT: slli a0, a0, 32
; RV64IM-NEXT: lui a1, 205
; RV64IM-NEXT: srli a0, a0, 32
-; RV64IM-NEXT: addiw a1, a1, -819
+; RV64IM-NEXT: addi a1, a1, -819
; RV64IM-NEXT: slli a1, a1, 12
; RV64IM-NEXT: addi a1, a1, -819
; RV64IM-NEXT: mul a0, a0, a1
@@ -62,7 +62,7 @@ define i32 @udiv_constant_add(i32 %a) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: lui a1, 149797
; RV64IM-NEXT: slli a2, a0, 32
-; RV64IM-NEXT: addiw a1, a1, -1755
+; RV64IM-NEXT: addi a1, a1, -1755
; RV64IM-NEXT: srli a2, a2, 32
; RV64IM-NEXT: mul a1, a2, a1
; RV64IM-NEXT: srli a1, a1, 32
@@ -75,7 +75,7 @@ define i32 @udiv_constant_add(i32 %a) nounwind {
; RV64IMZB-LABEL: udiv_constant_add:
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: lui a1, 149797
-; RV64IMZB-NEXT: addiw a1, a1, -1755
+; RV64IMZB-NEXT: addi a1, a1, -1755
; RV64IMZB-NEXT: zext.w a2, a0
; RV64IMZB-NEXT: mul a1, a2, a1
; RV64IMZB-NEXT: srli a1, a1, 32
@@ -301,7 +301,7 @@ define i16 @udiv16_constant_no_add(i16 %a) nounwind {
; RV64IM-NEXT: slli a0, a0, 48
; RV64IM-NEXT: lui a1, 13
; RV64IM-NEXT: srli a0, a0, 48
-; RV64IM-NEXT: addiw a1, a1, -819
+; RV64IM-NEXT: addi a1, a1, -819
; RV64IM-NEXT: mul a0, a0, a1
; RV64IM-NEXT: srli a0, a0, 18
; RV64IM-NEXT: ret
@@ -310,7 +310,7 @@ define i16 @udiv16_constant_no_add(i16 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: zext.h a0, a0
; RV64IMZB-NEXT: lui a1, 13
-; RV64IMZB-NEXT: addiw a1, a1, -819
+; RV64IMZB-NEXT: addi a1, a1, -819
; RV64IMZB-NEXT: mul a0, a0, a1
; RV64IMZB-NEXT: srli a0, a0, 18
; RV64IMZB-NEXT: ret
@@ -355,8 +355,8 @@ define i16 @udiv16_constant_add(i16 %a) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: lui a1, 2
; RV64IM-NEXT: lui a2, 16
-; RV64IM-NEXT: addiw a1, a1, 1171
-; RV64IM-NEXT: addiw a2, a2, -1
+; RV64IM-NEXT: addi a1, a1, 1171
+; RV64IM-NEXT: addi a2, a2, -1
; RV64IM-NEXT: and a3, a0, a2
; RV64IM-NEXT: mul a1, a3, a1
; RV64IM-NEXT: srli a1, a1, 16
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/float-intrinsics.ll b/llvm/test/CodeGen/RISCV/GlobalISel/float-intrinsics.ll
index 05730a710b4d8..88413291c26cd 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/float-intrinsics.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/float-intrinsics.ll
@@ -1002,7 +1002,7 @@ define i1 @fpclass(float %x) {
; RV64I-NEXT: lui a4, 2048
; RV64I-NEXT: lui a5, 520192
; RV64I-NEXT: srli a2, a2, 33
-; RV64I-NEXT: addiw a6, a4, -1
+; RV64I-NEXT: addi a6, a4, -1
; RV64I-NEXT: xor a0, a0, a2
; RV64I-NEXT: subw a3, a2, a3
; RV64I-NEXT: sltu a3, a3, a6
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/constant64.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/constant64.mir
index 646152e2e4ed4..0f00bd0ced264 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/constant64.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/constant64.mir
@@ -159,8 +159,8 @@ body: |
; CHECK: liveins: $x10
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[LUI:%[0-9]+]]:gpr = LUI 524288
- ; CHECK-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LUI]], 648
- ; CHECK-NEXT: $x10 = COPY [[ADDIW]]
+ ; CHECK-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], 648
+ ; CHECK-NEXT: $x10 = COPY [[ADDI]]
; CHECK-NEXT: PseudoRET implicit $x10
%0:gprb(s64) = G_CONSTANT i64 -2147483000
$x10 = COPY %0(s64)
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
index 8951e373ba7a9..3028b6476e20b 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir
@@ -1,8 +1,8 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 3
# RUN: llc -mtriple=riscv32 -mattr=+zfh -run-pass=instruction-select \
-# RUN: -simplify-mir -verify-machineinstrs %s -o - | FileCheck %s --check-prefixes=CHECK,RV32
+# RUN: -simplify-mir -verify-machineinstrs %s -o - | FileCheck %s
# RUN: llc -mtriple=riscv64 -mattr=+zfh -run-pass=instruction-select \
-# RUN: -simplify-mir -verify-machineinstrs %s -o - | FileCheck %s --check-prefixes=CHECK,RV64
+# RUN: -simplify-mir -verify-machineinstrs %s -o - | FileCheck %s
---
name: half_imm
@@ -10,19 +10,12 @@ legalized: true
regBankSelected: true
body: |
bb.1:
- ; RV32-LABEL: name: half_imm
- ; RV32: [[LUI:%[0-9]+]]:gpr = LUI 4
- ; RV32-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], 584
- ; RV32-NEXT: [[FMV_H_X:%[0-9]+]]:fpr16 = FMV_H_X [[ADDI]]
- ; RV32-NEXT: $f10_h = COPY [[FMV_H_X]]
- ; RV32-NEXT: PseudoRET implicit $f10_h
- ;
- ; RV64-LABEL: name: half_imm
- ; RV64: [[LUI:%[0-9]+]]:gpr = LUI 4
- ; RV64-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LUI]], 584
- ; RV64-NEXT: [[FMV_H_X:%[0-9]+]]:fpr16 = FMV_H_X [[ADDIW]]
- ; RV64-NEXT: $f10_h = COPY [[FMV_H_X]]
- ; RV64-NEXT: PseudoRET implicit $f10_h
+ ; CHECK-LABEL: name: half_imm
+ ; CHECK: [[LUI:%[0-9]+]]:gpr = LUI 4
+ ; CHECK-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], 584
+ ; CHECK-NEXT: [[FMV_H_X:%[0-9]+]]:fpr16 = FMV_H_X [[ADDI]]
+ ; CHECK-NEXT: $f10_h = COPY [[FMV_H_X]]
+ ; CHECK-NEXT: PseudoRET implicit $f10_h
%0:fprb(s16) = G_FCONSTANT half 0xH4248
$f10_h = COPY %0(s16)
PseudoRET implicit $f10_h
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant.mir
index 43f5ec1f57907..e82d4bcec48b1 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant.mir
@@ -10,19 +10,12 @@ legalized: true
regBankSelected: true
body: |
bb.1:
- ; RV32-LABEL: name: float_imm
- ; RV32: [[LUI:%[0-9]+]]:gpr = LUI 263313
- ; RV32-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], -37
- ; RV32-NEXT: [[FMV_W_X:%[0-9]+]]:fpr32 = FMV_W_X [[ADDI]]
- ; RV32-NEXT: $f10_f = COPY [[FMV_W_X]]
- ; RV32-NEXT: PseudoRET implicit $f10_f
- ;
- ; RV64-LABEL: name: float_imm
- ; RV64: [[LUI:%[0-9]+]]:gpr = LUI 263313
- ; RV64-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LUI]], -37
- ; RV64-NEXT: [[FMV_W_X:%[0-9]+]]:fpr32 = FMV_W_X [[ADDIW]]
- ; RV64-NEXT: $f10_f = COPY [[FMV_W_X]]
- ; RV64-NEXT: PseudoRET implicit $f10_f
+ ; CHECK-LABEL: name: float_imm
+ ; CHECK: [[LUI:%[0-9]+]]:gpr = LUI 263313
+ ; CHECK-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], -37
+ ; CHECK-NEXT: [[FMV_W_X:%[0-9]+]]:fpr32 = FMV_W_X [[ADDI]]
+ ; CHECK-NEXT: $f10_f = COPY [[FMV_W_X]]
+ ; CHECK-NEXT: PseudoRET implicit $f10_f
%0:fprb(s32) = G_FCONSTANT float 0x400921FB60000000
$f10_f = COPY %0(s32)
PseudoRET implicit $f10_f
@@ -109,14 +102,14 @@ body: |
;
; RV64-LABEL: name: double_imm
; RV64: [[LUI:%[0-9]+]]:gpr = LUI 512
- ; RV64-NEXT: [[ADDIW:%[0-9]+]]:gpr = ADDIW [[LUI]], 1169
- ; RV64-NEXT: [[SLLI:%[0-9]+]]:gpr = SLLI [[ADDIW]], 15
- ; RV64-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[SLLI]], -299
- ; RV64-NEXT: [[SLLI1:%[0-9]+]]:gpr = SLLI [[ADDI]], 14
- ; RV64-NEXT: [[ADDI1:%[0-9]+]]:gpr = ADDI [[SLLI1]], 1091
- ; RV64-NEXT: [[SLLI2:%[0-9]+]]:gpr = SLLI [[ADDI1]], 12
- ; RV64-NEXT: [[ADDI2:%[0-9]+]]:gpr = ADDI [[SLLI2]], -744
- ; RV64-NEXT: [[FMV_D_X:%[0-9]+]]:fpr64 = FMV_D_X [[ADDI2]]
+ ; RV64-NEXT: [[ADDI:%[0-9]+]]:gpr = ADDI [[LUI]], 1169
+ ; RV64-NEXT: [[SLLI:%[0-9]+]]:gpr = SLLI [[ADDI]], 15
+ ; RV64-NEXT: [[ADDI1:%[0-9]+]]:gpr = ADDI [[SLLI]], -299
+ ; RV64-NEXT: [[SLLI1:%[0-9]+]]:gpr = SLLI [[ADDI1]], 14
+ ; RV64-NEXT: [[ADDI2:%[0-9]+]]:gpr = ADDI [[SLLI1]], 1091
+ ; RV64-NEXT: [[SLLI2:%[0-9]+]]:gpr = SLLI [[ADDI2]], 12
+ ; RV64-NEXT: [[ADDI3:%[0-9]+]]:gpr = ADDI [[SLLI2]], -744
+ ; RV64-NEXT: [[FMV_D_X:%[0-9]+]]:fpr64 = FMV_D_X [[ADDI3]]
; RV64-NEXT: $f10_d = COPY [[FMV_D_X]]
; RV64-NEXT: PseudoRET implicit $f10_d
%0:fprb(s64) = G_FCONSTANT double 0x400921FB54442D18
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
index 8549a7c526e45..6fb3572774c52 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbb.ll
@@ -40,9 +40,9 @@ define signext i32 @ctlz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: li a1, 32
@@ -97,9 +97,9 @@ define signext i32 @log2_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: li a1, 32
@@ -162,9 +162,9 @@ define signext i32 @log2_ceil_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: li a1, 32
@@ -221,9 +221,9 @@ define signext i32 @findLastSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: beqz s0, .LBB3_2
; RV64I-NEXT: # %bb.1:
@@ -292,9 +292,9 @@ define i32 @ctlz_lshr_i32(i32 signext %a) {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: li a1, 32
@@ -421,9 +421,9 @@ define signext i32 @cttz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -464,9 +464,9 @@ define signext i32 @cttz_zero_undef_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -506,9 +506,9 @@ define signext i32 @findFirstSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: beqz s0, .LBB8_2
; RV64I-NEXT: # %bb.1:
@@ -562,9 +562,9 @@ define signext i32 @ffs_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: mv a1, a0
; RV64I-NEXT: li a0, 0
@@ -681,9 +681,9 @@ define signext i32 @ctpop_i32(i32 signext %a) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -737,9 +737,9 @@ define signext i32 @ctpop_i32_load(ptr %p) nounwind {
; RV64I-NEXT: sraiw a1, a0, 4
; RV64I-NEXT: addw a0, a1, a0
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: call __muldi3
; RV64I-NEXT: srliw a0, a0, 24
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -1185,7 +1185,7 @@ define i64 @bswap_i64(i64 %a) {
; RV64I-NEXT: srli a4, a0, 40
; RV64I-NEXT: or a1, a2, a1
; RV64I-NEXT: lui a2, 4080
-; RV64I-NEXT: addiw a3, a3, -256
+; RV64I-NEXT: addi a3, a3, -256
; RV64I-NEXT: and a4, a4, a3
; RV64I-NEXT: or a1, a1, a4
; RV64I-NEXT: srli a4, a0, 24
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
index f413abffcdccc..cd59c9e01806d 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/rv64zbkb.ll
@@ -141,7 +141,7 @@ define signext i32 @packh_i32(i32 signext %a, i32 signext %b) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: zext.b a0, a0
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: slli a1, a1, 8
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a0, a1, a0
@@ -151,7 +151,7 @@ define signext i32 @packh_i32(i32 signext %a, i32 signext %b) nounwind {
; RV64ZBKB: # %bb.0:
; RV64ZBKB-NEXT: lui a2, 16
; RV64ZBKB-NEXT: zext.b a0, a0
-; RV64ZBKB-NEXT: addiw a2, a2, -256
+; RV64ZBKB-NEXT: addi a2, a2, -256
; RV64ZBKB-NEXT: slli a1, a1, 8
; RV64ZBKB-NEXT: and a1, a1, a2
; RV64ZBKB-NEXT: or a0, a1, a0
@@ -189,7 +189,7 @@ define i64 @packh_i64(i64 %a, i64 %b) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: zext.b a0, a0
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: slli a1, a1, 8
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a0, a1, a0
@@ -199,7 +199,7 @@ define i64 @packh_i64(i64 %a, i64 %b) nounwind {
; RV64ZBKB: # %bb.0:
; RV64ZBKB-NEXT: lui a2, 16
; RV64ZBKB-NEXT: zext.b a0, a0
-; RV64ZBKB-NEXT: addiw a2, a2, -256
+; RV64ZBKB-NEXT: addi a2, a2, -256
; RV64ZBKB-NEXT: slli a1, a1, 8
; RV64ZBKB-NEXT: and a1, a1, a2
; RV64ZBKB-NEXT: or a0, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/vararg.ll b/llvm/test/CodeGen/RISCV/GlobalISel/vararg.ll
index fc9be94988451..afef96db5e290 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/vararg.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/vararg.ll
@@ -1171,7 +1171,7 @@ define void @va3_caller() nounwind {
; RV64-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64-NEXT: lui a1, 5
; RV64-NEXT: li a0, 2
-; RV64-NEXT: addiw a2, a1, -480
+; RV64-NEXT: addi a2, a1, -480
; RV64-NEXT: li a1, 1111
; RV64-NEXT: call va3
; RV64-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -1203,7 +1203,7 @@ define void @va3_caller() nounwind {
; RV64-WITHFP-NEXT: addi s0, sp, 16
; RV64-WITHFP-NEXT: lui a1, 5
; RV64-WITHFP-NEXT: li a0, 2
-; RV64-WITHFP-NEXT: addiw a2, a1, -480
+; RV64-WITHFP-NEXT: addi a2, a1, -480
; RV64-WITHFP-NEXT: li a1, 1111
; RV64-WITHFP-NEXT: call va3
; RV64-WITHFP-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -1618,7 +1618,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; RV64-LABEL: va_large_stack:
; RV64: # %bb.0:
; RV64-NEXT: lui a0, 24414
-; RV64-NEXT: addiw a0, a0, 336
+; RV64-NEXT: addi a0, a0, 336
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: .cfi_def_cfa_offset 100000080
; RV64-NEXT: lui a0, 24414
@@ -1635,7 +1635,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; RV64-NEXT: sd a4, 304(a0)
; RV64-NEXT: addi a0, sp, 8
; RV64-NEXT: lui a1, 24414
-; RV64-NEXT: addiw a1, a1, 280
+; RV64-NEXT: addi a1, a1, 280
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: sd a1, 8(sp)
; RV64-NEXT: lw a0, 4(a0)
@@ -1657,7 +1657,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; RV64-NEXT: sw a2, 12(sp)
; RV64-NEXT: lw a0, 0(a0)
; RV64-NEXT: lui a1, 24414
-; RV64-NEXT: addiw a1, a1, 336
+; RV64-NEXT: addi a1, a1, 336
; RV64-NEXT: add sp, sp, a1
; RV64-NEXT: .cfi_def_cfa_offset 0
; RV64-NEXT: ret
@@ -1714,10 +1714,10 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; RV64-WITHFP-NEXT: addi s0, sp, 1968
; RV64-WITHFP-NEXT: .cfi_def_cfa s0, 64
; RV64-WITHFP-NEXT: lui a0, 24414
-; RV64-WITHFP-NEXT: addiw a0, a0, -1680
+; RV64-WITHFP-NEXT: addi a0, a0, -1680
; RV64-WITHFP-NEXT: sub sp, sp, a0
; RV64-WITHFP-NEXT: lui a0, 24414
-; RV64-WITHFP-NEXT: addiw a0, a0, 288
+; RV64-WITHFP-NEXT: addi a0, a0, 288
; RV64-WITHFP-NEXT: sub a0, s0, a0
; RV64-WITHFP-NEXT: sd a1, 8(s0)
; RV64-WITHFP-NEXT: sd a2, 16(s0)
@@ -1738,7 +1738,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; RV64-WITHFP-NEXT: sw a3, 4(a0)
; RV64-WITHFP-NEXT: lw a0, 0(a1)
; RV64-WITHFP-NEXT: lui a1, 24414
-; RV64-WITHFP-NEXT: addiw a1, a1, -1680
+; RV64-WITHFP-NEXT: addi a1, a1, -1680
; RV64-WITHFP-NEXT: add sp, sp, a1
; RV64-WITHFP-NEXT: .cfi_def_cfa sp, 2032
; RV64-WITHFP-NEXT: ld ra, 1960(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/abdu-neg.ll b/llvm/test/CodeGen/RISCV/abdu-neg.ll
index 9fa142ee2aa1e..911db598eb831 100644
--- a/llvm/test/CodeGen/RISCV/abdu-neg.ll
+++ b/llvm/test/CodeGen/RISCV/abdu-neg.ll
@@ -167,7 +167,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_ext_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -271,7 +271,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_ext_i16_undef:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -1125,45 +1125,25 @@ define i8 @abd_minmax_i8(i8 %a, i8 %b) nounwind {
}
define i16 @abd_minmax_i16(i16 %a, i16 %b) nounwind {
-; RV32I-LABEL: abd_minmax_i16:
-; RV32I: # %bb.0:
-; RV32I-NEXT: lui a2, 16
-; RV32I-NEXT: addi a2, a2, -1
-; RV32I-NEXT: and a1, a1, a2
-; RV32I-NEXT: and a0, a0, a2
-; RV32I-NEXT: mv a2, a0
-; RV32I-NEXT: bgeu a0, a1, .LBB14_3
-; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: bgeu a1, a0, .LBB14_4
-; RV32I-NEXT: .LBB14_2:
-; RV32I-NEXT: sub a0, a2, a0
-; RV32I-NEXT: ret
-; RV32I-NEXT: .LBB14_3:
-; RV32I-NEXT: mv a2, a1
-; RV32I-NEXT: bltu a1, a0, .LBB14_2
-; RV32I-NEXT: .LBB14_4:
-; RV32I-NEXT: sub a0, a2, a1
-; RV32I-NEXT: ret
-;
-; RV64I-LABEL: abd_minmax_i16:
-; RV64I: # %bb.0:
-; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
-; RV64I-NEXT: and a1, a1, a2
-; RV64I-NEXT: and a0, a0, a2
-; RV64I-NEXT: mv a2, a0
-; RV64I-NEXT: bgeu a0, a1, .LBB14_3
-; RV64I-NEXT: # %bb.1:
-; RV64I-NEXT: bgeu a1, a0, .LBB14_4
-; RV64I-NEXT: .LBB14_2:
-; RV64I-NEXT: sub a0, a2, a0
-; RV64I-NEXT: ret
-; RV64I-NEXT: .LBB14_3:
-; RV64I-NEXT: mv a2, a1
-; RV64I-NEXT: bltu a1, a0, .LBB14_2
-; RV64I-NEXT: .LBB14_4:
-; RV64I-NEXT: sub a0, a2, a1
-; RV64I-NEXT: ret
+; NOZBB-LABEL: abd_minmax_i16:
+; NOZBB: # %bb.0:
+; NOZBB-NEXT: lui a2, 16
+; NOZBB-NEXT: addi a2, a2, -1
+; NOZBB-NEXT: and a1, a1, a2
+; NOZBB-NEXT: and a0, a0, a2
+; NOZBB-NEXT: mv a2, a0
+; NOZBB-NEXT: bgeu a0, a1, .LBB14_3
+; NOZBB-NEXT: # %bb.1:
+; NOZBB-NEXT: bgeu a1, a0, .LBB14_4
+; NOZBB-NEXT: .LBB14_2:
+; NOZBB-NEXT: sub a0, a2, a0
+; NOZBB-NEXT: ret
+; NOZBB-NEXT: .LBB14_3:
+; NOZBB-NEXT: mv a2, a1
+; NOZBB-NEXT: bltu a1, a0, .LBB14_2
+; NOZBB-NEXT: .LBB14_4:
+; NOZBB-NEXT: sub a0, a2, a1
+; NOZBB-NEXT: ret
;
; ZBB-LABEL: abd_minmax_i16:
; ZBB: # %bb.0:
@@ -1628,33 +1608,19 @@ define i8 @abd_cmp_i8(i8 %a, i8 %b) nounwind {
}
define i16 @abd_cmp_i16(i16 %a, i16 %b) nounwind {
-; RV32I-LABEL: abd_cmp_i16:
-; RV32I: # %bb.0:
-; RV32I-NEXT: lui a2, 16
-; RV32I-NEXT: addi a2, a2, -1
-; RV32I-NEXT: and a3, a1, a2
-; RV32I-NEXT: and a2, a0, a2
-; RV32I-NEXT: bltu a2, a3, .LBB19_2
-; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: sub a0, a1, a0
-; RV32I-NEXT: ret
-; RV32I-NEXT: .LBB19_2:
-; RV32I-NEXT: sub a0, a0, a1
-; RV32I-NEXT: ret
-;
-; RV64I-LABEL: abd_cmp_i16:
-; RV64I: # %bb.0:
-; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
-; RV64I-NEXT: and a3, a1, a2
-; RV64I-NEXT: and a2, a0, a2
-; RV64I-NEXT: bltu a2, a3, .LBB19_2
-; RV64I-NEXT: # %bb.1:
-; RV64I-NEXT: sub a0, a1, a0
-; RV64I-NEXT: ret
-; RV64I-NEXT: .LBB19_2:
-; RV64I-NEXT: sub a0, a0, a1
-; RV64I-NEXT: ret
+; NOZBB-LABEL: abd_cmp_i16:
+; NOZBB: # %bb.0:
+; NOZBB-NEXT: lui a2, 16
+; NOZBB-NEXT: addi a2, a2, -1
+; NOZBB-NEXT: and a3, a1, a2
+; NOZBB-NEXT: and a2, a0, a2
+; NOZBB-NEXT: bltu a2, a3, .LBB19_2
+; NOZBB-NEXT: # %bb.1:
+; NOZBB-NEXT: sub a0, a1, a0
+; NOZBB-NEXT: ret
+; NOZBB-NEXT: .LBB19_2:
+; NOZBB-NEXT: sub a0, a0, a1
+; NOZBB-NEXT: ret
;
; ZBB-LABEL: abd_cmp_i16:
; ZBB: # %bb.0:
diff --git a/llvm/test/CodeGen/RISCV/abdu.ll b/llvm/test/CodeGen/RISCV/abdu.ll
index 614d9c20ac574..6ef172a6cd618 100644
--- a/llvm/test/CodeGen/RISCV/abdu.ll
+++ b/llvm/test/CodeGen/RISCV/abdu.ll
@@ -137,7 +137,7 @@ define i16 @abd_ext_i16(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_ext_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -228,7 +228,7 @@ define i16 @abd_ext_i16_undef(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_ext_i16_undef:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -992,7 +992,7 @@ define i16 @abd_minmax_i16(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_minmax_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -1382,7 +1382,7 @@ define i16 @abd_cmp_i16(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_cmp_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sub a0, a0, a1
@@ -1776,7 +1776,7 @@ define i16 @abd_select_i16(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: abd_select_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sub a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/addimm-mulimm.ll b/llvm/test/CodeGen/RISCV/addimm-mulimm.ll
index 8e445511b6119..aac355e3f055b 100644
--- a/llvm/test/CodeGen/RISCV/addimm-mulimm.ll
+++ b/llvm/test/CodeGen/RISCV/addimm-mulimm.ll
@@ -153,7 +153,7 @@ define i64 @add_mul_combine_accept_b3(i64 %x) {
; RV64IMB-NEXT: slli a0, a0, 5
; RV64IMB-NEXT: sub a0, a0, a1
; RV64IMB-NEXT: lui a1, 50
-; RV64IMB-NEXT: addiw a1, a1, 1119
+; RV64IMB-NEXT: addi a1, a1, 1119
; RV64IMB-NEXT: add a0, a0, a1
; RV64IMB-NEXT: ret
%tmp0 = add i64 %x, 8953
@@ -685,7 +685,7 @@ define i64 @mul3000_add8990_c(i64 %x) {
; RV64IMB: # %bb.0:
; RV64IMB-NEXT: addi a0, a0, 3
; RV64IMB-NEXT: lui a1, 1
-; RV64IMB-NEXT: addiw a1, a1, -1096
+; RV64IMB-NEXT: addi a1, a1, -1096
; RV64IMB-NEXT: mul a0, a0, a1
; RV64IMB-NEXT: addi a0, a0, -10
; RV64IMB-NEXT: ret
@@ -761,7 +761,7 @@ define i64 @mul3000_sub8990_c(i64 %x) {
; RV64IMB: # %bb.0:
; RV64IMB-NEXT: addi a0, a0, -3
; RV64IMB-NEXT: lui a1, 1
-; RV64IMB-NEXT: addiw a1, a1, -1096
+; RV64IMB-NEXT: addi a1, a1, -1096
; RV64IMB-NEXT: mul a0, a0, a1
; RV64IMB-NEXT: addi a0, a0, 10
; RV64IMB-NEXT: ret
@@ -837,7 +837,7 @@ define i64 @mulneg3000_add8990_c(i64 %x) {
; RV64IMB: # %bb.0:
; RV64IMB-NEXT: addi a0, a0, -3
; RV64IMB-NEXT: lui a1, 1048575
-; RV64IMB-NEXT: addiw a1, a1, 1096
+; RV64IMB-NEXT: addi a1, a1, 1096
; RV64IMB-NEXT: mul a0, a0, a1
; RV64IMB-NEXT: addi a0, a0, -10
; RV64IMB-NEXT: ret
@@ -914,7 +914,7 @@ define i64 @mulneg3000_sub8990_c(i64 %x) {
; RV64IMB: # %bb.0:
; RV64IMB-NEXT: addi a0, a0, 3
; RV64IMB-NEXT: lui a1, 1048575
-; RV64IMB-NEXT: addiw a1, a1, 1096
+; RV64IMB-NEXT: addi a1, a1, 1096
; RV64IMB-NEXT: mul a0, a0, a1
; RV64IMB-NEXT: addi a0, a0, 10
; RV64IMB-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/alu16.ll b/llvm/test/CodeGen/RISCV/alu16.ll
index 41f26526ef03e..b5a2524410c0b 100644
--- a/llvm/test/CodeGen/RISCV/alu16.ll
+++ b/llvm/test/CodeGen/RISCV/alu16.ll
@@ -286,7 +286,7 @@ define i16 @sltu(i16 %a, i16 %b) nounwind {
; RV64I-LABEL: sltu:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sltu a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/atomic-rmw.ll b/llvm/test/CodeGen/RISCV/atomic-rmw.ll
index 1e5acd2575b88..b0510f85c49a7 100644
--- a/llvm/test/CodeGen/RISCV/atomic-rmw.ll
+++ b/llvm/test/CodeGen/RISCV/atomic-rmw.ll
@@ -11018,7 +11018,7 @@ define i16 @atomicrmw_xchg_minus_1_i16_monotonic(ptr %a) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: li a2, 0
; RV64I-NEXT: call __atomic_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -11102,7 +11102,7 @@ define i16 @atomicrmw_xchg_minus_1_i16_acquire(ptr %a) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: li a2, 2
; RV64I-NEXT: call __atomic_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -11208,7 +11208,7 @@ define i16 @atomicrmw_xchg_minus_1_i16_release(ptr %a) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: li a2, 3
; RV64I-NEXT: call __atomic_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -11314,7 +11314,7 @@ define i16 @atomicrmw_xchg_minus_1_i16_acq_rel(ptr %a) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: li a2, 4
; RV64I-NEXT: call __atomic_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -11420,7 +11420,7 @@ define i16 @atomicrmw_xchg_minus_1_i16_seq_cst(ptr %a) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: li a2, 5
; RV64I-NEXT: call __atomic_exchange_2
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
@@ -17923,7 +17923,7 @@ define i16 @atomicrmw_umax_i16_monotonic(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB120_2
; RV64I-NEXT: .LBB120_1: # %atomicrmw.start
@@ -18125,7 +18125,7 @@ define i16 @atomicrmw_umax_i16_acquire(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB121_2
; RV64I-NEXT: .LBB121_1: # %atomicrmw.start
@@ -18377,7 +18377,7 @@ define i16 @atomicrmw_umax_i16_release(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB122_2
; RV64I-NEXT: .LBB122_1: # %atomicrmw.start
@@ -18629,7 +18629,7 @@ define i16 @atomicrmw_umax_i16_acq_rel(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB123_2
; RV64I-NEXT: .LBB123_1: # %atomicrmw.start
@@ -18856,7 +18856,7 @@ define i16 @atomicrmw_umax_i16_seq_cst(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB124_2
; RV64I-NEXT: .LBB124_1: # %atomicrmw.start
@@ -19033,7 +19033,7 @@ define i16 @atomicrmw_umin_i16_monotonic(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB125_2
; RV64I-NEXT: .LBB125_1: # %atomicrmw.start
@@ -19235,7 +19235,7 @@ define i16 @atomicrmw_umin_i16_acquire(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB126_2
; RV64I-NEXT: .LBB126_1: # %atomicrmw.start
@@ -19487,7 +19487,7 @@ define i16 @atomicrmw_umin_i16_release(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB127_2
; RV64I-NEXT: .LBB127_1: # %atomicrmw.start
@@ -19739,7 +19739,7 @@ define i16 @atomicrmw_umin_i16_acq_rel(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB128_2
; RV64I-NEXT: .LBB128_1: # %atomicrmw.start
@@ -19966,7 +19966,7 @@ define i16 @atomicrmw_umin_i16_seq_cst(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB129_2
; RV64I-NEXT: .LBB129_1: # %atomicrmw.start
diff --git a/llvm/test/CodeGen/RISCV/atomic-signext.ll b/llvm/test/CodeGen/RISCV/atomic-signext.ll
index b9702e9fe0fc2..bebc097deb192 100644
--- a/llvm/test/CodeGen/RISCV/atomic-signext.ll
+++ b/llvm/test/CodeGen/RISCV/atomic-signext.ll
@@ -2023,7 +2023,7 @@ define signext i16 @atomicrmw_umax_i16_monotonic(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB23_2
; RV64I-NEXT: .LBB23_1: # %atomicrmw.start
@@ -2171,7 +2171,7 @@ define signext i16 @atomicrmw_umin_i16_monotonic(ptr %a, i16 %b) nounwind {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB24_2
; RV64I-NEXT: .LBB24_1: # %atomicrmw.start
diff --git a/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll b/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
index 2db6f80f4fd61..27704d107f93d 100644
--- a/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
+++ b/llvm/test/CodeGen/RISCV/atomicrmw-cond-sub-clamp.ll
@@ -292,7 +292,7 @@ define i16 @atomicrmw_usub_cond_i16(ptr %ptr, i16 %val) {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: .LBB1_1: # %atomicrmw.start
; RV64I-NEXT: # =>This Inner Loop Header: Depth=1
@@ -331,7 +331,7 @@ define i16 @atomicrmw_usub_cond_i16(ptr %ptr, i16 %val) {
; RV64IA-NEXT: slli a5, a0, 3
; RV64IA-NEXT: lui a3, 16
; RV64IA-NEXT: andi a0, a5, 24
-; RV64IA-NEXT: addiw a3, a3, -1
+; RV64IA-NEXT: addi a3, a3, -1
; RV64IA-NEXT: lw a4, 0(a2)
; RV64IA-NEXT: sllw a5, a3, a5
; RV64IA-NEXT: not a5, a5
@@ -955,7 +955,7 @@ define i16 @atomicrmw_usub_sat_i16(ptr %ptr, i16 %val) {
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lhu a3, 0(a0)
; RV64I-NEXT: lui s1, 16
-; RV64I-NEXT: addiw s1, s1, -1
+; RV64I-NEXT: addi s1, s1, -1
; RV64I-NEXT: and s2, a1, s1
; RV64I-NEXT: .LBB5_1: # %atomicrmw.start
; RV64I-NEXT: # =>This Inner Loop Header: Depth=1
@@ -992,7 +992,7 @@ define i16 @atomicrmw_usub_sat_i16(ptr %ptr, i16 %val) {
; RV64IA-NEXT: slli a5, a0, 3
; RV64IA-NEXT: lui a3, 16
; RV64IA-NEXT: andi a0, a5, 24
-; RV64IA-NEXT: addiw a3, a3, -1
+; RV64IA-NEXT: addi a3, a3, -1
; RV64IA-NEXT: lw a4, 0(a2)
; RV64IA-NEXT: sllw a5, a3, a5
; RV64IA-NEXT: not a5, a5
diff --git a/llvm/test/CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll b/llvm/test/CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll
index ae1db4f1d62da..ada1933d91d60 100644
--- a/llvm/test/CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll
+++ b/llvm/test/CodeGen/RISCV/atomicrmw-uinc-udec-wrap.ll
@@ -274,7 +274,7 @@ define i16 @atomicrmw_uinc_wrap_i16(ptr %ptr, i16 %val) {
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lhu a3, 0(a0)
; RV64I-NEXT: lui s1, 16
-; RV64I-NEXT: addiw s1, s1, -1
+; RV64I-NEXT: addi s1, s1, -1
; RV64I-NEXT: and s2, a1, s1
; RV64I-NEXT: .LBB1_1: # %atomicrmw.start
; RV64I-NEXT: # =>This Inner Loop Header: Depth=1
@@ -311,7 +311,7 @@ define i16 @atomicrmw_uinc_wrap_i16(ptr %ptr, i16 %val) {
; RV64IA-NEXT: slli a5, a0, 3
; RV64IA-NEXT: lui a3, 16
; RV64IA-NEXT: andi a0, a5, 24
-; RV64IA-NEXT: addiw a3, a3, -1
+; RV64IA-NEXT: addi a3, a3, -1
; RV64IA-NEXT: lw a4, 0(a2)
; RV64IA-NEXT: sllw a5, a3, a5
; RV64IA-NEXT: not a5, a5
@@ -1001,7 +1001,7 @@ define i16 @atomicrmw_udec_wrap_i16(ptr %ptr, i16 %val) {
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lhu a1, 0(a0)
; RV64I-NEXT: lui s2, 16
-; RV64I-NEXT: addiw s2, s2, -1
+; RV64I-NEXT: addi s2, s2, -1
; RV64I-NEXT: and s3, s0, s2
; RV64I-NEXT: j .LBB5_2
; RV64I-NEXT: .LBB5_1: # %atomicrmw.start
@@ -1048,7 +1048,7 @@ define i16 @atomicrmw_udec_wrap_i16(ptr %ptr, i16 %val) {
; RV64IA-NEXT: slli a5, a0, 3
; RV64IA-NEXT: lui a3, 16
; RV64IA-NEXT: andi a0, a5, 24
-; RV64IA-NEXT: addiw a3, a3, -1
+; RV64IA-NEXT: addi a3, a3, -1
; RV64IA-NEXT: lw a4, 0(a2)
; RV64IA-NEXT: sllw a5, a3, a5
; RV64IA-NEXT: not a5, a5
diff --git a/llvm/test/CodeGen/RISCV/avgceilu.ll b/llvm/test/CodeGen/RISCV/avgceilu.ll
index 735bd19909d5f..1c1d1cbfd12cb 100644
--- a/llvm/test/CodeGen/RISCV/avgceilu.ll
+++ b/llvm/test/CodeGen/RISCV/avgceilu.ll
@@ -75,7 +75,7 @@ define i16 @test_fixed_i16(i16 %a0, i16 %a1) nounwind {
; RV64I-LABEL: test_fixed_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: add a0, a0, a1
@@ -104,7 +104,7 @@ define i16 @test_ext_i16(i16 %a0, i16 %a1) nounwind {
; RV64I-LABEL: test_ext_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: add a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/avgflooru.ll b/llvm/test/CodeGen/RISCV/avgflooru.ll
index 8a69c9393c87a..2e56f3359434c 100644
--- a/llvm/test/CodeGen/RISCV/avgflooru.ll
+++ b/llvm/test/CodeGen/RISCV/avgflooru.ll
@@ -69,7 +69,7 @@ define i16 @test_fixed_i16(i16 %a0, i16 %a1) nounwind {
; RV64I-LABEL: test_fixed_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: add a0, a0, a1
@@ -96,7 +96,7 @@ define i16 @test_ext_i16(i16 %a0, i16 %a1) nounwind {
; RV64I-LABEL: test_ext_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: add a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/bittest.ll b/llvm/test/CodeGen/RISCV/bittest.ll
index d69ab0550a034..fa6892be44d97 100644
--- a/llvm/test/CodeGen/RISCV/bittest.ll
+++ b/llvm/test/CodeGen/RISCV/bittest.ll
@@ -271,19 +271,12 @@ define i1 @bittest_constant_by_var_shr_i32(i32 signext %b) nounwind {
; RV64I-NEXT: andi a0, a0, 1
; RV64I-NEXT: ret
;
-; RV32ZBS-LABEL: bittest_constant_by_var_shr_i32:
-; RV32ZBS: # %bb.0:
-; RV32ZBS-NEXT: lui a1, 301408
-; RV32ZBS-NEXT: addi a1, a1, 722
-; RV32ZBS-NEXT: bext a0, a1, a0
-; RV32ZBS-NEXT: ret
-;
-; RV64ZBS-LABEL: bittest_constant_by_var_shr_i32:
-; RV64ZBS: # %bb.0:
-; RV64ZBS-NEXT: lui a1, 301408
-; RV64ZBS-NEXT: addiw a1, a1, 722
-; RV64ZBS-NEXT: bext a0, a1, a0
-; RV64ZBS-NEXT: ret
+; ZBS-LABEL: bittest_constant_by_var_shr_i32:
+; ZBS: # %bb.0:
+; ZBS-NEXT: lui a1, 301408
+; ZBS-NEXT: addi a1, a1, 722
+; ZBS-NEXT: bext a0, a1, a0
+; ZBS-NEXT: ret
;
; RV32XTHEADBS-LABEL: bittest_constant_by_var_shr_i32:
; RV32XTHEADBS: # %bb.0:
@@ -324,19 +317,12 @@ define i1 @bittest_constant_by_var_shl_i32(i32 signext %b) nounwind {
; RV64I-NEXT: andi a0, a0, 1
; RV64I-NEXT: ret
;
-; RV32ZBS-LABEL: bittest_constant_by_var_shl_i32:
-; RV32ZBS: # %bb.0:
-; RV32ZBS-NEXT: lui a1, 301408
-; RV32ZBS-NEXT: addi a1, a1, 722
-; RV32ZBS-NEXT: bext a0, a1, a0
-; RV32ZBS-NEXT: ret
-;
-; RV64ZBS-LABEL: bittest_constant_by_var_shl_i32:
-; RV64ZBS: # %bb.0:
-; RV64ZBS-NEXT: lui a1, 301408
-; RV64ZBS-NEXT: addiw a1, a1, 722
-; RV64ZBS-NEXT: bext a0, a1, a0
-; RV64ZBS-NEXT: ret
+; ZBS-LABEL: bittest_constant_by_var_shl_i32:
+; ZBS: # %bb.0:
+; ZBS-NEXT: lui a1, 301408
+; ZBS-NEXT: addi a1, a1, 722
+; ZBS-NEXT: bext a0, a1, a0
+; ZBS-NEXT: ret
;
; RV32XTHEADBS-LABEL: bittest_constant_by_var_shl_i32:
; RV32XTHEADBS: # %bb.0:
@@ -374,7 +360,7 @@ define i1 @bittest_constant_by_var_shr_i64(i64 %b) nounwind {
; RV64I-LABEL: bittest_constant_by_var_shr_i64:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 301408
-; RV64I-NEXT: addiw a1, a1, 722
+; RV64I-NEXT: addi a1, a1, 722
; RV64I-NEXT: srl a0, a1, a0
; RV64I-NEXT: andi a0, a0, 1
; RV64I-NEXT: ret
@@ -382,14 +368,14 @@ define i1 @bittest_constant_by_var_shr_i64(i64 %b) nounwind {
; RV64ZBS-LABEL: bittest_constant_by_var_shr_i64:
; RV64ZBS: # %bb.0:
; RV64ZBS-NEXT: lui a1, 301408
-; RV64ZBS-NEXT: addiw a1, a1, 722
+; RV64ZBS-NEXT: addi a1, a1, 722
; RV64ZBS-NEXT: bext a0, a1, a0
; RV64ZBS-NEXT: ret
;
; RV64XTHEADBS-LABEL: bittest_constant_by_var_shr_i64:
; RV64XTHEADBS: # %bb.0:
; RV64XTHEADBS-NEXT: lui a1, 301408
-; RV64XTHEADBS-NEXT: addiw a1, a1, 722
+; RV64XTHEADBS-NEXT: addi a1, a1, 722
; RV64XTHEADBS-NEXT: srl a0, a1, a0
; RV64XTHEADBS-NEXT: andi a0, a0, 1
; RV64XTHEADBS-NEXT: ret
@@ -414,7 +400,7 @@ define i1 @bittest_constant_by_var_shl_i64(i64 %b) nounwind {
; RV64I-LABEL: bittest_constant_by_var_shl_i64:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 301408
-; RV64I-NEXT: addiw a1, a1, 722
+; RV64I-NEXT: addi a1, a1, 722
; RV64I-NEXT: srl a0, a1, a0
; RV64I-NEXT: andi a0, a0, 1
; RV64I-NEXT: ret
@@ -422,14 +408,14 @@ define i1 @bittest_constant_by_var_shl_i64(i64 %b) nounwind {
; RV64ZBS-LABEL: bittest_constant_by_var_shl_i64:
; RV64ZBS: # %bb.0:
; RV64ZBS-NEXT: lui a1, 301408
-; RV64ZBS-NEXT: addiw a1, a1, 722
+; RV64ZBS-NEXT: addi a1, a1, 722
; RV64ZBS-NEXT: bext a0, a1, a0
; RV64ZBS-NEXT: ret
;
; RV64XTHEADBS-LABEL: bittest_constant_by_var_shl_i64:
; RV64XTHEADBS: # %bb.0:
; RV64XTHEADBS-NEXT: lui a1, 301408
-; RV64XTHEADBS-NEXT: addiw a1, a1, 722
+; RV64XTHEADBS-NEXT: addi a1, a1, 722
; RV64XTHEADBS-NEXT: srl a0, a1, a0
; RV64XTHEADBS-NEXT: andi a0, a0, 1
; RV64XTHEADBS-NEXT: ret
@@ -462,7 +448,7 @@ define void @bittest_switch(i32 signext %0) {
; RV64I-NEXT: bltu a1, a0, .LBB14_3
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: lui a1, 2048
-; RV64I-NEXT: addiw a1, a1, 51
+; RV64I-NEXT: addi a1, a1, 51
; RV64I-NEXT: slli a1, a1, 8
; RV64I-NEXT: srl a0, a1, a0
; RV64I-NEXT: andi a0, a0, 1
@@ -492,7 +478,7 @@ define void @bittest_switch(i32 signext %0) {
; RV64ZBS-NEXT: bltu a1, a0, .LBB14_3
; RV64ZBS-NEXT: # %bb.1:
; RV64ZBS-NEXT: lui a1, 2048
-; RV64ZBS-NEXT: addiw a1, a1, 51
+; RV64ZBS-NEXT: addi a1, a1, 51
; RV64ZBS-NEXT: slli a1, a1, 8
; RV64ZBS-NEXT: bext a0, a1, a0
; RV64ZBS-NEXT: beqz a0, .LBB14_3
@@ -522,7 +508,7 @@ define void @bittest_switch(i32 signext %0) {
; RV64XTHEADBS-NEXT: bltu a1, a0, .LBB14_3
; RV64XTHEADBS-NEXT: # %bb.1:
; RV64XTHEADBS-NEXT: lui a1, 2048
-; RV64XTHEADBS-NEXT: addiw a1, a1, 51
+; RV64XTHEADBS-NEXT: addi a1, a1, 51
; RV64XTHEADBS-NEXT: slli a1, a1, 8
; RV64XTHEADBS-NEXT: srl a0, a1, a0
; RV64XTHEADBS-NEXT: andi a0, a0, 1
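As a concrete illustration of why these check updates are behaviour-preserving, take the constant used throughout the bit-test hunks above: `lui a1, 301408` followed by `addi a1, a1, 722` materialises 0x499602d2 (1234567890). Bit 31 of the result is clear, so the sign extension that addiw would perform is a no-op and addi produces a bit-identical register. A reduced sketch (the function name is made up, not taken from the tests):

```llvm
; RV64 materialisation of 1234567890 = 0x499602d2:
;   lui  a1, 301408   ; a1 = 0x0000000049960000 (lui sign-extends from
;                     ; bit 31, which is clear, so the upper half is zero)
;   addi a1, a1, 722  ; a1 = 0x00000000499602d2; addiw would sign-extend
;                     ; from bit 31 of the 32-bit sum, which is also
;                     ; clear, so both instructions give the same value
define i64 @materialise_mask() {
  ret i64 1234567890
}
```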
diff --git a/llvm/test/CodeGen/RISCV/branch-relaxation-rv64.ll b/llvm/test/CodeGen/RISCV/branch-relaxation-rv64.ll
index 23a862ec580ec..9c794a73dc636 100644
--- a/llvm/test/CodeGen/RISCV/branch-relaxation-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/branch-relaxation-rv64.ll
@@ -189,7 +189,7 @@ define void @relax_jal_spill_64() {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: beq t5, t6, .LBB2_1
; CHECK-NEXT: # %bb.3:
-; CHECK-NEXT: sd s11, 0(sp)
+; CHECK-NEXT: sd s11, 0(sp) # 8-byte Folded Spill
; CHECK-NEXT: jump .LBB2_4, s11
; CHECK-NEXT: .LBB2_1: # %branch_1
; CHECK-NEXT: #APP
@@ -197,7 +197,7 @@ define void @relax_jal_spill_64() {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: j .LBB2_2
; CHECK-NEXT: .LBB2_4: # %branch_2
-; CHECK-NEXT: ld s11, 0(sp)
+; CHECK-NEXT: ld s11, 0(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB2_2: # %branch_2
; CHECK-NEXT: #APP
; CHECK-NEXT: # reg use ra
@@ -419,7 +419,7 @@ define void @relax_jal_spill_64_adjust_spill_slot() {
; CHECK-NEXT: addi s0, sp, 2032
; CHECK-NEXT: .cfi_def_cfa s0, 0
; CHECK-NEXT: lui a0, 2
-; CHECK-NEXT: addiw a0, a0, -2032
+; CHECK-NEXT: addi a0, a0, -2032
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: srli a0, sp, 12
; CHECK-NEXT: slli sp, a0, 12
@@ -509,7 +509,7 @@ define void @relax_jal_spill_64_adjust_spill_slot() {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: beq t5, t6, .LBB3_1
; CHECK-NEXT: # %bb.3:
-; CHECK-NEXT: sd s11, 0(sp)
+; CHECK-NEXT: sd s11, 0(sp) # 8-byte Folded Spill
; CHECK-NEXT: jump .LBB3_4, s11
; CHECK-NEXT: .LBB3_1: # %branch_1
; CHECK-NEXT: #APP
@@ -517,7 +517,7 @@ define void @relax_jal_spill_64_adjust_spill_slot() {
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: j .LBB3_2
; CHECK-NEXT: .LBB3_4: # %branch_2
-; CHECK-NEXT: ld s11, 0(sp)
+; CHECK-NEXT: ld s11, 0(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB3_2: # %branch_2
; CHECK-NEXT: #APP
; CHECK-NEXT: # reg use ra
@@ -825,7 +825,7 @@ define void @relax_jal_spill_64_restore_block_correspondence() {
; CHECK-NEXT: bne t5, t6, .LBB4_2
; CHECK-NEXT: j .LBB4_1
; CHECK-NEXT: .LBB4_8: # %dest_1
-; CHECK-NEXT: ld s11, 0(sp)
+; CHECK-NEXT: ld s11, 0(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB4_1: # %dest_1
; CHECK-NEXT: #APP
; CHECK-NEXT: # dest 1
@@ -962,7 +962,7 @@ define void @relax_jal_spill_64_restore_block_correspondence() {
; CHECK-NEXT: .zero 1048576
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: # %bb.7: # %space
-; CHECK-NEXT: sd s11, 0(sp)
+; CHECK-NEXT: sd s11, 0(sp) # 8-byte Folded Spill
; CHECK-NEXT: jump .LBB4_8, s11
entry:
%ra = call i64 asm sideeffect "addi ra, x0, 1", "={ra}"()
diff --git a/llvm/test/CodeGen/RISCV/bswap-bitreverse.ll b/llvm/test/CodeGen/RISCV/bswap-bitreverse.ll
index 40a5772142345..531ad608b483a 100644
--- a/llvm/test/CodeGen/RISCV/bswap-bitreverse.ll
+++ b/llvm/test/CodeGen/RISCV/bswap-bitreverse.ll
@@ -73,7 +73,7 @@ define i32 @test_bswap_i32(i32 %a) nounwind {
; RV64I-NEXT: srli a1, a0, 8
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: srliw a3, a0, 24
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a2, a0, a2
; RV64I-NEXT: or a1, a1, a3
@@ -129,7 +129,7 @@ define i64 @test_bswap_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 56
; RV64I-NEXT: srli a4, a0, 24
; RV64I-NEXT: lui a5, 4080
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srli a3, a0, 8
@@ -296,20 +296,20 @@ define i16 @test_bitreverse_i16(i16 %a) nounwind {
; RV64I-NEXT: slli a0, a0, 48
; RV64I-NEXT: lui a2, 1
; RV64I-NEXT: srli a0, a0, 56
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 3
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slli a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 5
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: slli a0, a0, 2
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 1
@@ -350,18 +350,18 @@ define i16 @test_bitreverse_i16(i16 %a) nounwind {
; RV64ZBB-NEXT: rev8 a0, a0
; RV64ZBB-NEXT: lui a1, 15
; RV64ZBB-NEXT: srli a2, a0, 44
-; RV64ZBB-NEXT: addiw a1, a1, 240
+; RV64ZBB-NEXT: addi a1, a1, 240
; RV64ZBB-NEXT: and a1, a2, a1
; RV64ZBB-NEXT: lui a2, 3
; RV64ZBB-NEXT: srli a0, a0, 52
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: andi a0, a0, -241
; RV64ZBB-NEXT: or a0, a0, a1
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 5
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slli a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -430,7 +430,7 @@ define i32 @test_bitreverse_i32(i32 %a) nounwind {
; RV64I-NEXT: srli a1, a0, 8
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: srliw a3, a0, 24
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a2, a0, a2
; RV64I-NEXT: slliw a0, a0, 24
@@ -439,14 +439,14 @@ define i32 @test_bitreverse_i32(i32 %a) nounwind {
; RV64I-NEXT: slli a2, a2, 8
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: or a0, a0, a1
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: and a0, a0, a3
; RV64I-NEXT: and a1, a1, a3
; RV64I-NEXT: lui a3, 349525
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, 1365
; RV64I-NEXT: slliw a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
@@ -492,21 +492,21 @@ define i32 @test_bitreverse_i32(i32 %a) nounwind {
; RV64ZBB-NEXT: rev8 a0, a0
; RV64ZBB-NEXT: lui a1, 61681
; RV64ZBB-NEXT: srli a2, a0, 36
-; RV64ZBB-NEXT: addiw a1, a1, -241
+; RV64ZBB-NEXT: addi a1, a1, -241
; RV64ZBB-NEXT: and a1, a2, a1
; RV64ZBB-NEXT: lui a2, 986895
; RV64ZBB-NEXT: srli a0, a0, 28
; RV64ZBB-NEXT: addi a2, a2, 240
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: lui a2, 209715
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: sext.w a0, a0
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 349525
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slliw a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -603,7 +603,7 @@ define i64 @test_bitreverse_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a6, a0, 8
; RV64I-NEXT: srliw a7, a0, 24
; RV64I-NEXT: lui t0, 61681
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: lui a3, 209715
@@ -614,9 +614,9 @@ define i64 @test_bitreverse_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a6, 349525
; RV64I-NEXT: and a5, a0, a5
; RV64I-NEXT: slli a7, a7, 32
-; RV64I-NEXT: addiw t0, t0, -241
-; RV64I-NEXT: addiw a3, a3, 819
-; RV64I-NEXT: addiw a6, a6, 1365
+; RV64I-NEXT: addi t0, t0, -241
+; RV64I-NEXT: addi a3, a3, 819
+; RV64I-NEXT: addi a6, a6, 1365
; RV64I-NEXT: slli a5, a5, 24
; RV64I-NEXT: or a5, a5, a7
; RV64I-NEXT: slli a7, t0, 32
@@ -697,9 +697,9 @@ define i64 @test_bitreverse_i64(i64 %a) nounwind {
; RV64ZBB-NEXT: lui a1, 61681
; RV64ZBB-NEXT: lui a2, 209715
; RV64ZBB-NEXT: lui a3, 349525
-; RV64ZBB-NEXT: addiw a1, a1, -241
-; RV64ZBB-NEXT: addiw a2, a2, 819
-; RV64ZBB-NEXT: addiw a3, a3, 1365
+; RV64ZBB-NEXT: addi a1, a1, -241
+; RV64ZBB-NEXT: addi a2, a2, 819
+; RV64ZBB-NEXT: addi a3, a3, 1365
; RV64ZBB-NEXT: slli a4, a1, 32
; RV64ZBB-NEXT: add a1, a1, a4
; RV64ZBB-NEXT: slli a4, a2, 32
@@ -770,18 +770,18 @@ define i16 @test_bswap_bitreverse_i16(i16 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: lui a2, 1
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: lui a2, 3
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slli a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 5
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: slli a0, a0, 2
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 1
@@ -820,18 +820,18 @@ define i16 @test_bswap_bitreverse_i16(i16 %a) nounwind {
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: srli a1, a0, 4
; RV64ZBB-NEXT: lui a2, 1
-; RV64ZBB-NEXT: addiw a2, a2, -241
+; RV64ZBB-NEXT: addi a2, a2, -241
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: lui a2, 3
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: slli a0, a0, 4
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 5
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slli a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -885,18 +885,18 @@ define i32 @test_bswap_bitreverse_i32(i32 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: lui a2, 61681
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slliw a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 349525
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: slliw a0, a0, 2
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 1
@@ -935,18 +935,18 @@ define i32 @test_bswap_bitreverse_i32(i32 %a) nounwind {
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: srli a1, a0, 4
; RV64ZBB-NEXT: lui a2, 61681
-; RV64ZBB-NEXT: addiw a2, a2, -241
+; RV64ZBB-NEXT: addi a2, a2, -241
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: lui a2, 209715
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: slliw a0, a0, 4
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 349525
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slliw a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -1016,9 +1016,9 @@ define i64 @test_bswap_bitreverse_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a1, 61681
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 349525
-; RV64I-NEXT: addiw a1, a1, -241
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, 1365
+; RV64I-NEXT: addi a1, a1, -241
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, 1365
; RV64I-NEXT: slli a4, a1, 32
; RV64I-NEXT: add a1, a1, a4
; RV64I-NEXT: slli a4, a2, 32
@@ -1087,9 +1087,9 @@ define i64 @test_bswap_bitreverse_i64(i64 %a) nounwind {
; RV64ZBB-NEXT: lui a1, 61681
; RV64ZBB-NEXT: lui a2, 209715
; RV64ZBB-NEXT: lui a3, 349525
-; RV64ZBB-NEXT: addiw a1, a1, -241
-; RV64ZBB-NEXT: addiw a2, a2, 819
-; RV64ZBB-NEXT: addiw a3, a3, 1365
+; RV64ZBB-NEXT: addi a1, a1, -241
+; RV64ZBB-NEXT: addi a2, a2, 819
+; RV64ZBB-NEXT: addi a3, a3, 1365
; RV64ZBB-NEXT: slli a4, a1, 32
; RV64ZBB-NEXT: add a1, a1, a4
; RV64ZBB-NEXT: slli a4, a2, 32
@@ -1158,18 +1158,18 @@ define i16 @test_bitreverse_bswap_i16(i16 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: lui a2, 1
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: lui a2, 3
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slli a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 5
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: slli a0, a0, 2
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 1
@@ -1208,18 +1208,18 @@ define i16 @test_bitreverse_bswap_i16(i16 %a) nounwind {
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: srli a1, a0, 4
; RV64ZBB-NEXT: lui a2, 1
-; RV64ZBB-NEXT: addiw a2, a2, -241
+; RV64ZBB-NEXT: addi a2, a2, -241
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: lui a2, 3
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: slli a0, a0, 4
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 5
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slli a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -1273,18 +1273,18 @@ define i32 @test_bitreverse_bswap_i32(i32 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 4
; RV64I-NEXT: lui a2, 61681
-; RV64I-NEXT: addiw a2, a2, -241
+; RV64I-NEXT: addi a2, a2, -241
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slliw a0, a0, 4
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 349525
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: slliw a0, a0, 2
; RV64I-NEXT: or a0, a1, a0
; RV64I-NEXT: srli a1, a0, 1
@@ -1323,18 +1323,18 @@ define i32 @test_bitreverse_bswap_i32(i32 %a) nounwind {
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: srli a1, a0, 4
; RV64ZBB-NEXT: lui a2, 61681
-; RV64ZBB-NEXT: addiw a2, a2, -241
+; RV64ZBB-NEXT: addi a2, a2, -241
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: lui a2, 209715
-; RV64ZBB-NEXT: addiw a2, a2, 819
+; RV64ZBB-NEXT: addi a2, a2, 819
; RV64ZBB-NEXT: slliw a0, a0, 4
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 2
; RV64ZBB-NEXT: and a0, a0, a2
; RV64ZBB-NEXT: and a1, a1, a2
; RV64ZBB-NEXT: lui a2, 349525
-; RV64ZBB-NEXT: addiw a2, a2, 1365
+; RV64ZBB-NEXT: addi a2, a2, 1365
; RV64ZBB-NEXT: slliw a0, a0, 2
; RV64ZBB-NEXT: or a0, a1, a0
; RV64ZBB-NEXT: srli a1, a0, 1
@@ -1404,9 +1404,9 @@ define i64 @test_bitreverse_bswap_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a1, 61681
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 349525
-; RV64I-NEXT: addiw a1, a1, -241
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, 1365
+; RV64I-NEXT: addi a1, a1, -241
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, 1365
; RV64I-NEXT: slli a4, a1, 32
; RV64I-NEXT: add a1, a1, a4
; RV64I-NEXT: slli a4, a2, 32
@@ -1475,9 +1475,9 @@ define i64 @test_bitreverse_bswap_i64(i64 %a) nounwind {
; RV64ZBB-NEXT: lui a1, 61681
; RV64ZBB-NEXT: lui a2, 209715
; RV64ZBB-NEXT: lui a3, 349525
-; RV64ZBB-NEXT: addiw a1, a1, -241
-; RV64ZBB-NEXT: addiw a2, a2, 819
-; RV64ZBB-NEXT: addiw a3, a3, 1365
+; RV64ZBB-NEXT: addi a1, a1, -241
+; RV64ZBB-NEXT: addi a2, a2, 819
+; RV64ZBB-NEXT: addi a3, a3, 1365
; RV64ZBB-NEXT: slli a4, a1, 32
; RV64ZBB-NEXT: add a1, a1, a4
; RV64ZBB-NEXT: slli a4, a2, 32
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-half.ll b/llvm/test/CodeGen/RISCV/calling-conv-half.ll
index 541c9b4d40c7e..008036b2d2e20 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-half.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-half.ll
@@ -361,7 +361,7 @@ define i32 @caller_half_on_stack() nounwind {
; RV64I-NEXT: li a4, 5
; RV64I-NEXT: li a5, 6
; RV64I-NEXT: li a6, 7
-; RV64I-NEXT: addiw t0, a7, -1792
+; RV64I-NEXT: addi t0, a7, -1792
; RV64I-NEXT: li a7, 8
; RV64I-NEXT: sd t0, 0(sp)
; RV64I-NEXT: call callee_half_on_stack
@@ -510,7 +510,7 @@ define half @callee_half_ret() nounwind {
; RV64IF-LABEL: callee_half_ret:
; RV64IF: # %bb.0:
; RV64IF-NEXT: lui a0, 1048564
-; RV64IF-NEXT: addiw a0, a0, -1024
+; RV64IF-NEXT: addi a0, a0, -1024
; RV64IF-NEXT: ret
;
; RV32-ILP32F-LABEL: callee_half_ret:
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll b/llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll
index e0f46e7484518..a63dc0ef3a3a7 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-lp64-lp64f-lp64d-common.ll
@@ -450,7 +450,7 @@ define i256 @callee_large_scalar_ret() nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: li a1, -1
; RV64I-NEXT: lui a2, 1018435
-; RV64I-NEXT: addiw a2, a2, 747
+; RV64I-NEXT: addi a2, a2, 747
; RV64I-NEXT: sd a2, 0(a0)
; RV64I-NEXT: sd a1, 8(a0)
; RV64I-NEXT: sd a1, 16(a0)
diff --git a/llvm/test/CodeGen/RISCV/codemodel-lowering.ll b/llvm/test/CodeGen/RISCV/codemodel-lowering.ll
index 4831f0b24c7fe..086c3ac181521 100644
--- a/llvm/test/CodeGen/RISCV/codemodel-lowering.ll
+++ b/llvm/test/CodeGen/RISCV/codemodel-lowering.ll
@@ -303,21 +303,21 @@ define float @lower_constantpool(float %a) nounwind {
; RV64FINX-SMALL-LABEL: lower_constantpool:
; RV64FINX-SMALL: # %bb.0:
; RV64FINX-SMALL-NEXT: lui a1, 260097
-; RV64FINX-SMALL-NEXT: addiw a1, a1, -2048
+; RV64FINX-SMALL-NEXT: addi a1, a1, -2048
; RV64FINX-SMALL-NEXT: fadd.s a0, a0, a1
; RV64FINX-SMALL-NEXT: ret
;
; RV64FINX-MEDIUM-LABEL: lower_constantpool:
; RV64FINX-MEDIUM: # %bb.0:
; RV64FINX-MEDIUM-NEXT: lui a1, 260097
-; RV64FINX-MEDIUM-NEXT: addiw a1, a1, -2048
+; RV64FINX-MEDIUM-NEXT: addi a1, a1, -2048
; RV64FINX-MEDIUM-NEXT: fadd.s a0, a0, a1
; RV64FINX-MEDIUM-NEXT: ret
;
; RV64FINX-LARGE-LABEL: lower_constantpool:
; RV64FINX-LARGE: # %bb.0:
; RV64FINX-LARGE-NEXT: lui a1, 260097
-; RV64FINX-LARGE-NEXT: addiw a1, a1, -2048
+; RV64FINX-LARGE-NEXT: addi a1, a1, -2048
; RV64FINX-LARGE-NEXT: fadd.s a0, a0, a1
; RV64FINX-LARGE-NEXT: ret
%1 = fadd float %a, 1.000244140625
diff --git a/llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll b/llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll
index 8b9d602dcde83..724891853630a 100644
--- a/llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll
+++ b/llvm/test/CodeGen/RISCV/ctlz-cttz-ctpop.ll
@@ -163,11 +163,11 @@ define i16 @test_cttz_i16(i16 %a) nounwind {
; RV64NOZBB-NEXT: not a0, a0
; RV64NOZBB-NEXT: lui a2, 5
; RV64NOZBB-NEXT: and a0, a0, a1
-; RV64NOZBB-NEXT: addiw a1, a2, 1365
+; RV64NOZBB-NEXT: addi a1, a2, 1365
; RV64NOZBB-NEXT: srli a2, a0, 1
; RV64NOZBB-NEXT: and a1, a2, a1
; RV64NOZBB-NEXT: lui a2, 3
-; RV64NOZBB-NEXT: addiw a2, a2, 819
+; RV64NOZBB-NEXT: addi a2, a2, 819
; RV64NOZBB-NEXT: sub a0, a0, a1
; RV64NOZBB-NEXT: and a1, a0, a2
; RV64NOZBB-NEXT: srli a0, a0, 2
@@ -671,11 +671,11 @@ define i16 @test_cttz_i16_zero_undef(i16 %a) nounwind {
; RV64NOZBB-NEXT: not a0, a0
; RV64NOZBB-NEXT: lui a2, 5
; RV64NOZBB-NEXT: and a0, a0, a1
-; RV64NOZBB-NEXT: addiw a1, a2, 1365
+; RV64NOZBB-NEXT: addi a1, a2, 1365
; RV64NOZBB-NEXT: srli a2, a0, 1
; RV64NOZBB-NEXT: and a1, a2, a1
; RV64NOZBB-NEXT: lui a2, 3
-; RV64NOZBB-NEXT: addiw a2, a2, 819
+; RV64NOZBB-NEXT: addi a2, a2, 819
; RV64NOZBB-NEXT: sub a0, a0, a1
; RV64NOZBB-NEXT: and a1, a0, a2
; RV64NOZBB-NEXT: srli a0, a0, 2
@@ -1131,7 +1131,7 @@ define i16 @test_ctlz_i16(i16 %a) nounwind {
; RV64NOZBB-NEXT: srli a1, a1, 49
; RV64NOZBB-NEXT: lui a2, 5
; RV64NOZBB-NEXT: or a0, a0, a1
-; RV64NOZBB-NEXT: addiw a1, a2, 1365
+; RV64NOZBB-NEXT: addi a1, a2, 1365
; RV64NOZBB-NEXT: slli a2, a0, 48
; RV64NOZBB-NEXT: srli a2, a2, 50
; RV64NOZBB-NEXT: or a0, a0, a2
@@ -1145,7 +1145,7 @@ define i16 @test_ctlz_i16(i16 %a) nounwind {
; RV64NOZBB-NEXT: srli a2, a0, 1
; RV64NOZBB-NEXT: and a1, a2, a1
; RV64NOZBB-NEXT: lui a2, 3
-; RV64NOZBB-NEXT: addiw a2, a2, 819
+; RV64NOZBB-NEXT: addi a2, a2, 819
; RV64NOZBB-NEXT: sub a0, a0, a1
; RV64NOZBB-NEXT: and a1, a0, a2
; RV64NOZBB-NEXT: srli a0, a0, 2
@@ -1243,7 +1243,7 @@ define i32 @test_ctlz_i32(i32 %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -1256,7 +1256,7 @@ define i32 @test_ctlz_i32(i32 %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -1325,7 +1325,7 @@ define i32 @test_ctlz_i32(i32 %a) nounwind {
; RV64M-NEXT: srliw a1, a0, 1
; RV64M-NEXT: lui a2, 349525
; RV64M-NEXT: or a0, a0, a1
-; RV64M-NEXT: addiw a1, a2, 1365
+; RV64M-NEXT: addi a1, a2, 1365
; RV64M-NEXT: srliw a2, a0, 2
; RV64M-NEXT: or a0, a0, a2
; RV64M-NEXT: srliw a2, a0, 4
@@ -1338,7 +1338,7 @@ define i32 @test_ctlz_i32(i32 %a) nounwind {
; RV64M-NEXT: srli a2, a0, 1
; RV64M-NEXT: and a1, a2, a1
; RV64M-NEXT: lui a2, 209715
-; RV64M-NEXT: addiw a2, a2, 819
+; RV64M-NEXT: addi a2, a2, 819
; RV64M-NEXT: sub a0, a0, a1
; RV64M-NEXT: and a1, a0, a2
; RV64M-NEXT: srli a0, a0, 2
@@ -1468,8 +1468,8 @@ define i64 @test_ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: lui a3, 209715
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
-; RV64I-NEXT: addiw a2, a3, 819
+; RV64I-NEXT: addi a1, a2, 1365
+; RV64I-NEXT: addi a2, a3, 819
; RV64I-NEXT: srli a3, a0, 2
; RV64I-NEXT: or a0, a0, a3
; RV64I-NEXT: slli a3, a1, 32
@@ -1488,7 +1488,7 @@ define i64 @test_ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -1592,9 +1592,9 @@ define i64 @test_ctlz_i64(i64 %a) nounwind {
; RV64M-NEXT: lui a3, 209715
; RV64M-NEXT: lui a4, 61681
; RV64M-NEXT: or a0, a0, a1
-; RV64M-NEXT: addiw a1, a2, 1365
-; RV64M-NEXT: addiw a2, a3, 819
-; RV64M-NEXT: addiw a3, a4, -241
+; RV64M-NEXT: addi a1, a2, 1365
+; RV64M-NEXT: addi a2, a3, 819
+; RV64M-NEXT: addi a3, a4, -241
; RV64M-NEXT: srli a4, a0, 2
; RV64M-NEXT: or a0, a0, a4
; RV64M-NEXT: slli a4, a1, 32
@@ -1619,7 +1619,7 @@ define i64 @test_ctlz_i64(i64 %a) nounwind {
; RV64M-NEXT: srli a0, a0, 2
; RV64M-NEXT: and a0, a0, a2
; RV64M-NEXT: lui a2, 4112
-; RV64M-NEXT: addiw a2, a2, 257
+; RV64M-NEXT: addi a2, a2, 257
; RV64M-NEXT: add a0, a1, a0
; RV64M-NEXT: srli a1, a0, 4
; RV64M-NEXT: add a0, a0, a1
@@ -1788,7 +1788,7 @@ define i16 @test_ctlz_i16_zero_undef(i16 %a) nounwind {
; RV64NOZBB-NEXT: slli a1, a0, 48
; RV64NOZBB-NEXT: lui a2, 5
; RV64NOZBB-NEXT: srli a1, a1, 49
-; RV64NOZBB-NEXT: addiw a2, a2, 1365
+; RV64NOZBB-NEXT: addi a2, a2, 1365
; RV64NOZBB-NEXT: or a0, a0, a1
; RV64NOZBB-NEXT: slli a1, a0, 48
; RV64NOZBB-NEXT: srli a1, a1, 50
@@ -1803,7 +1803,7 @@ define i16 @test_ctlz_i16_zero_undef(i16 %a) nounwind {
; RV64NOZBB-NEXT: srli a1, a0, 1
; RV64NOZBB-NEXT: and a1, a1, a2
; RV64NOZBB-NEXT: lui a2, 3
-; RV64NOZBB-NEXT: addiw a2, a2, 819
+; RV64NOZBB-NEXT: addi a2, a2, 819
; RV64NOZBB-NEXT: sub a0, a0, a1
; RV64NOZBB-NEXT: and a1, a0, a2
; RV64NOZBB-NEXT: srli a0, a0, 2
@@ -1886,7 +1886,7 @@ define i32 @test_ctlz_i32_zero_undef(i32 %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -1899,7 +1899,7 @@ define i32 @test_ctlz_i32_zero_undef(i32 %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -1957,7 +1957,7 @@ define i32 @test_ctlz_i32_zero_undef(i32 %a) nounwind {
; RV64M-NEXT: srliw a1, a0, 1
; RV64M-NEXT: lui a2, 349525
; RV64M-NEXT: or a0, a0, a1
-; RV64M-NEXT: addiw a1, a2, 1365
+; RV64M-NEXT: addi a1, a2, 1365
; RV64M-NEXT: srliw a2, a0, 2
; RV64M-NEXT: or a0, a0, a2
; RV64M-NEXT: srliw a2, a0, 4
@@ -1970,7 +1970,7 @@ define i32 @test_ctlz_i32_zero_undef(i32 %a) nounwind {
; RV64M-NEXT: srli a2, a0, 1
; RV64M-NEXT: and a1, a2, a1
; RV64M-NEXT: lui a2, 209715
-; RV64M-NEXT: addiw a2, a2, 819
+; RV64M-NEXT: addi a2, a2, 819
; RV64M-NEXT: sub a0, a0, a1
; RV64M-NEXT: and a1, a0, a2
; RV64M-NEXT: srli a0, a0, 2
@@ -2088,8 +2088,8 @@ define i64 @test_ctlz_i64_zero_undef(i64 %a) nounwind {
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: lui a3, 209715
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
-; RV64I-NEXT: addiw a2, a3, 819
+; RV64I-NEXT: addi a1, a2, 1365
+; RV64I-NEXT: addi a2, a3, 819
; RV64I-NEXT: srli a3, a0, 2
; RV64I-NEXT: or a0, a0, a3
; RV64I-NEXT: slli a3, a1, 32
@@ -2108,7 +2108,7 @@ define i64 @test_ctlz_i64_zero_undef(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -2200,9 +2200,9 @@ define i64 @test_ctlz_i64_zero_undef(i64 %a) nounwind {
; RV64M-NEXT: lui a3, 209715
; RV64M-NEXT: lui a4, 61681
; RV64M-NEXT: or a0, a0, a1
-; RV64M-NEXT: addiw a1, a2, 1365
-; RV64M-NEXT: addiw a2, a3, 819
-; RV64M-NEXT: addiw a3, a4, -241
+; RV64M-NEXT: addi a1, a2, 1365
+; RV64M-NEXT: addi a2, a3, 819
+; RV64M-NEXT: addi a3, a4, -241
; RV64M-NEXT: srli a4, a0, 2
; RV64M-NEXT: or a0, a0, a4
; RV64M-NEXT: slli a4, a1, 32
@@ -2227,7 +2227,7 @@ define i64 @test_ctlz_i64_zero_undef(i64 %a) nounwind {
; RV64M-NEXT: srli a0, a0, 2
; RV64M-NEXT: and a0, a0, a2
; RV64M-NEXT: lui a2, 4112
-; RV64M-NEXT: addiw a2, a2, 257
+; RV64M-NEXT: addi a2, a2, 257
; RV64M-NEXT: add a0, a1, a0
; RV64M-NEXT: srli a1, a0, 4
; RV64M-NEXT: add a0, a0, a1
@@ -2375,10 +2375,10 @@ define i16 @test_ctpop_i16(i16 %a) nounwind {
; RV64NOZBB: # %bb.0:
; RV64NOZBB-NEXT: srli a1, a0, 1
; RV64NOZBB-NEXT: lui a2, 5
-; RV64NOZBB-NEXT: addiw a2, a2, 1365
+; RV64NOZBB-NEXT: addi a2, a2, 1365
; RV64NOZBB-NEXT: and a1, a1, a2
; RV64NOZBB-NEXT: lui a2, 3
-; RV64NOZBB-NEXT: addiw a2, a2, 819
+; RV64NOZBB-NEXT: addi a2, a2, 819
; RV64NOZBB-NEXT: sub a0, a0, a1
; RV64NOZBB-NEXT: and a1, a0, a2
; RV64NOZBB-NEXT: srli a0, a0, 2
@@ -2428,10 +2428,10 @@ define i16 @test_ctpop_i16(i16 %a) nounwind {
; RV64XTHEADBB: # %bb.0:
; RV64XTHEADBB-NEXT: srli a1, a0, 1
; RV64XTHEADBB-NEXT: lui a2, 5
-; RV64XTHEADBB-NEXT: addiw a2, a2, 1365
+; RV64XTHEADBB-NEXT: addi a2, a2, 1365
; RV64XTHEADBB-NEXT: and a1, a1, a2
; RV64XTHEADBB-NEXT: lui a2, 3
-; RV64XTHEADBB-NEXT: addiw a2, a2, 819
+; RV64XTHEADBB-NEXT: addi a2, a2, 819
; RV64XTHEADBB-NEXT: sub a0, a0, a1
; RV64XTHEADBB-NEXT: and a1, a0, a2
; RV64XTHEADBB-NEXT: srli a0, a0, 2
@@ -2477,10 +2477,10 @@ define i32 @test_ctpop_i32(i32 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 1
; RV64I-NEXT: lui a2, 349525
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -2526,10 +2526,10 @@ define i32 @test_ctpop_i32(i32 %a) nounwind {
; RV64M: # %bb.0:
; RV64M-NEXT: srli a1, a0, 1
; RV64M-NEXT: lui a2, 349525
-; RV64M-NEXT: addiw a2, a2, 1365
+; RV64M-NEXT: addi a2, a2, 1365
; RV64M-NEXT: and a1, a1, a2
; RV64M-NEXT: lui a2, 209715
-; RV64M-NEXT: addiw a2, a2, 819
+; RV64M-NEXT: addi a2, a2, 819
; RV64M-NEXT: sub a0, a0, a1
; RV64M-NEXT: and a1, a0, a2
; RV64M-NEXT: srli a0, a0, 2
@@ -2585,10 +2585,10 @@ define i32 @test_ctpop_i32(i32 %a) nounwind {
; RV64XTHEADBB: # %bb.0:
; RV64XTHEADBB-NEXT: srli a1, a0, 1
; RV64XTHEADBB-NEXT: lui a2, 349525
-; RV64XTHEADBB-NEXT: addiw a2, a2, 1365
+; RV64XTHEADBB-NEXT: addi a2, a2, 1365
; RV64XTHEADBB-NEXT: and a1, a1, a2
; RV64XTHEADBB-NEXT: lui a2, 209715
-; RV64XTHEADBB-NEXT: addiw a2, a2, 819
+; RV64XTHEADBB-NEXT: addi a2, a2, 819
; RV64XTHEADBB-NEXT: sub a0, a0, a1
; RV64XTHEADBB-NEXT: and a1, a0, a2
; RV64XTHEADBB-NEXT: srli a0, a0, 2
@@ -2656,8 +2656,8 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 349525
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slli a3, a1, 32
; RV64I-NEXT: add a1, a1, a3
; RV64I-NEXT: slli a3, a2, 32
@@ -2665,7 +2665,7 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -2728,9 +2728,9 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64M-NEXT: lui a1, 349525
; RV64M-NEXT: lui a2, 209715
; RV64M-NEXT: lui a3, 61681
-; RV64M-NEXT: addiw a1, a1, 1365
-; RV64M-NEXT: addiw a2, a2, 819
-; RV64M-NEXT: addiw a3, a3, -241
+; RV64M-NEXT: addi a1, a1, 1365
+; RV64M-NEXT: addi a2, a2, 819
+; RV64M-NEXT: addi a3, a3, -241
; RV64M-NEXT: slli a4, a1, 32
; RV64M-NEXT: add a1, a1, a4
; RV64M-NEXT: slli a4, a2, 32
@@ -2744,7 +2744,7 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64M-NEXT: srli a0, a0, 2
; RV64M-NEXT: and a0, a0, a2
; RV64M-NEXT: lui a2, 4112
-; RV64M-NEXT: addiw a2, a2, 257
+; RV64M-NEXT: addi a2, a2, 257
; RV64M-NEXT: add a0, a1, a0
; RV64M-NEXT: srli a1, a0, 4
; RV64M-NEXT: add a0, a0, a1
@@ -2814,8 +2814,8 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64XTHEADBB: # %bb.0:
; RV64XTHEADBB-NEXT: lui a1, 349525
; RV64XTHEADBB-NEXT: lui a2, 209715
-; RV64XTHEADBB-NEXT: addiw a1, a1, 1365
-; RV64XTHEADBB-NEXT: addiw a2, a2, 819
+; RV64XTHEADBB-NEXT: addi a1, a1, 1365
+; RV64XTHEADBB-NEXT: addi a2, a2, 819
; RV64XTHEADBB-NEXT: slli a3, a1, 32
; RV64XTHEADBB-NEXT: add a1, a1, a3
; RV64XTHEADBB-NEXT: slli a3, a2, 32
@@ -2823,7 +2823,7 @@ define i64 @test_ctpop_i64(i64 %a) nounwind {
; RV64XTHEADBB-NEXT: srli a3, a0, 1
; RV64XTHEADBB-NEXT: and a1, a3, a1
; RV64XTHEADBB-NEXT: lui a3, 61681
-; RV64XTHEADBB-NEXT: addiw a3, a3, -241
+; RV64XTHEADBB-NEXT: addi a3, a3, -241
; RV64XTHEADBB-NEXT: sub a0, a0, a1
; RV64XTHEADBB-NEXT: and a1, a0, a2
; RV64XTHEADBB-NEXT: srli a0, a0, 2
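The same reasoning covers every mask in the ctlz/cttz/ctpop expansions above: each lui+addi pair reconstructs one of the classic bit-counting constants, and all of them are positive as i32, so addi and addiw are interchangeable. A worked summary (hypothetical reduced input, not one of the tests in this file):

```llvm
; Masks recovered from the lui/addi pairs in the checks above:
;   0x55555555 = (349525 << 12) + 1365
;   0x33333333 = (209715 << 12) + 819
;   0x0f0f0f0f = (61681  << 12) - 241
;   0x01010101 = (4112   << 12) + 257
; All four have bit 31 clear, so no addiw is required.
define i32 @popcount(i32 %x) {
  %c = call i32 @llvm.ctpop.i32(i32 %x)
  ret i32 %c
}
declare i32 @llvm.ctpop.i32(i32)
```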
diff --git a/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll b/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
index 33907e10730a7..637fb314e9536 100644
--- a/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
+++ b/llvm/test/CodeGen/RISCV/ctz_zero_return_test.ll
@@ -732,8 +732,8 @@ define signext i32 @ctlz(i64 %b) nounwind {
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: lui a3, 209715
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
-; RV64I-NEXT: addiw a2, a3, 819
+; RV64I-NEXT: addi a1, a2, 1365
+; RV64I-NEXT: addi a2, a3, 819
; RV64I-NEXT: srli a3, a0, 2
; RV64I-NEXT: or a0, a0, a3
; RV64I-NEXT: slli a3, a1, 32
@@ -752,7 +752,7 @@ define signext i32 @ctlz(i64 %b) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
diff --git a/llvm/test/CodeGen/RISCV/div-by-constant.ll b/llvm/test/CodeGen/RISCV/div-by-constant.ll
index e14a894bf1878..ea8b04d727acf 100644
--- a/llvm/test/CodeGen/RISCV/div-by-constant.ll
+++ b/llvm/test/CodeGen/RISCV/div-by-constant.ll
@@ -64,7 +64,7 @@ define i32 @udiv_constant_add(i32 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: zext.w a1, a0
; RV64IMZB-NEXT: lui a2, 149797
-; RV64IMZB-NEXT: addiw a2, a2, -1755
+; RV64IMZB-NEXT: addi a2, a2, -1755
; RV64IMZB-NEXT: mul a1, a1, a2
; RV64IMZB-NEXT: srli a1, a1, 32
; RV64IMZB-NEXT: subw a0, a0, a1
@@ -104,7 +104,7 @@ define i64 @udiv64_constant_no_add(i64 %a) nounwind {
; RV64-LABEL: udiv64_constant_no_add:
; RV64: # %bb.0:
; RV64-NEXT: lui a1, 838861
-; RV64-NEXT: addiw a1, a1, -819
+; RV64-NEXT: addi a1, a1, -819
; RV64-NEXT: slli a2, a1, 32
; RV64-NEXT: add a1, a1, a2
; RV64-NEXT: mulhu a0, a0, a1
@@ -282,7 +282,7 @@ define i32 @sdiv_constant_no_srai(i32 %a) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: sext.w a0, a0
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a1, a1, 1366
+; RV64-NEXT: addi a1, a1, 1366
; RV64-NEXT: mul a0, a0, a1
; RV64-NEXT: srli a1, a0, 63
; RV64-NEXT: srli a0, a0, 32
@@ -308,7 +308,7 @@ define i32 @sdiv_constant_srai(i32 %a) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: sext.w a0, a0
; RV64-NEXT: lui a1, 419430
-; RV64-NEXT: addiw a1, a1, 1639
+; RV64-NEXT: addi a1, a1, 1639
; RV64-NEXT: mul a0, a0, a1
; RV64-NEXT: srli a1, a0, 63
; RV64-NEXT: srai a0, a0, 33
@@ -335,7 +335,7 @@ define i32 @sdiv_constant_add_srai(i32 %a) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: sext.w a1, a0
; RV64-NEXT: lui a2, 599186
-; RV64-NEXT: addiw a2, a2, 1171
+; RV64-NEXT: addi a2, a2, 1171
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: srli a1, a1, 32
; RV64-NEXT: add a0, a1, a0
@@ -364,7 +364,7 @@ define i32 @sdiv_constant_sub_srai(i32 %a) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: sext.w a1, a0
; RV64-NEXT: lui a2, 449390
-; RV64-NEXT: addiw a2, a2, -1171
+; RV64-NEXT: addi a2, a2, -1171
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: srli a1, a1, 32
; RV64-NEXT: subw a1, a1, a0
@@ -440,7 +440,7 @@ define i64 @sdiv64_constant_add_srai(i64 %a) nounwind {
; RV64-LABEL: sdiv64_constant_add_srai:
; RV64: # %bb.0:
; RV64-NEXT: lui a1, 559241
-; RV64-NEXT: addiw a1, a1, -1911
+; RV64-NEXT: addi a1, a1, -1911
; RV64-NEXT: slli a2, a1, 32
; RV64-NEXT: add a1, a1, a2
; RV64-NEXT: mulh a1, a0, a1
@@ -468,7 +468,7 @@ define i64 @sdiv64_constant_sub_srai(i64 %a) nounwind {
; RV64-LABEL: sdiv64_constant_sub_srai:
; RV64: # %bb.0:
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: slli a2, a1, 32
; RV64-NEXT: add a1, a1, a2
; RV64-NEXT: mulh a1, a0, a1
@@ -718,7 +718,7 @@ define i16 @sdiv16_constant_no_srai(i16 %a) nounwind {
; RV64IM-NEXT: slli a0, a0, 48
; RV64IM-NEXT: lui a1, 5
; RV64IM-NEXT: srai a0, a0, 48
-; RV64IM-NEXT: addiw a1, a1, 1366
+; RV64IM-NEXT: addi a1, a1, 1366
; RV64IM-NEXT: mul a0, a0, a1
; RV64IM-NEXT: srli a1, a0, 63
; RV64IM-NEXT: srli a0, a0, 16
@@ -729,7 +729,7 @@ define i16 @sdiv16_constant_no_srai(i16 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: sext.h a0, a0
; RV64IMZB-NEXT: lui a1, 5
-; RV64IMZB-NEXT: addiw a1, a1, 1366
+; RV64IMZB-NEXT: addi a1, a1, 1366
; RV64IMZB-NEXT: mul a0, a0, a1
; RV64IMZB-NEXT: srli a1, a0, 63
; RV64IMZB-NEXT: srli a0, a0, 16
@@ -768,7 +768,7 @@ define i16 @sdiv16_constant_srai(i16 %a) nounwind {
; RV64IM-NEXT: slli a0, a0, 48
; RV64IM-NEXT: lui a1, 6
; RV64IM-NEXT: srai a0, a0, 48
-; RV64IM-NEXT: addiw a1, a1, 1639
+; RV64IM-NEXT: addi a1, a1, 1639
; RV64IM-NEXT: mul a0, a0, a1
; RV64IM-NEXT: srli a1, a0, 63
; RV64IM-NEXT: srai a0, a0, 17
@@ -779,7 +779,7 @@ define i16 @sdiv16_constant_srai(i16 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: sext.h a0, a0
; RV64IMZB-NEXT: lui a1, 6
-; RV64IMZB-NEXT: addiw a1, a1, 1639
+; RV64IMZB-NEXT: addi a1, a1, 1639
; RV64IMZB-NEXT: mul a0, a0, a1
; RV64IMZB-NEXT: srli a1, a0, 63
; RV64IMZB-NEXT: srai a0, a0, 17
@@ -824,7 +824,7 @@ define i16 @sdiv16_constant_add_srai(i16 %a) nounwind {
; RV64IM-NEXT: slli a1, a0, 48
; RV64IM-NEXT: lui a2, 1048569
; RV64IM-NEXT: srai a1, a1, 48
-; RV64IM-NEXT: addiw a2, a2, -1911
+; RV64IM-NEXT: addi a2, a2, -1911
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a1, a1, 16
; RV64IM-NEXT: add a0, a1, a0
@@ -838,7 +838,7 @@ define i16 @sdiv16_constant_add_srai(i16 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: sext.h a1, a0
; RV64IMZB-NEXT: lui a2, 1048569
-; RV64IMZB-NEXT: addiw a2, a2, -1911
+; RV64IMZB-NEXT: addi a2, a2, -1911
; RV64IMZB-NEXT: mul a1, a1, a2
; RV64IMZB-NEXT: srli a1, a1, 16
; RV64IMZB-NEXT: add a0, a1, a0
@@ -886,7 +886,7 @@ define i16 @sdiv16_constant_sub_srai(i16 %a) nounwind {
; RV64IM-NEXT: slli a1, a0, 48
; RV64IM-NEXT: lui a2, 7
; RV64IM-NEXT: srai a1, a1, 48
-; RV64IM-NEXT: addiw a2, a2, 1911
+; RV64IM-NEXT: addi a2, a2, 1911
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a1, a1, 16
; RV64IM-NEXT: subw a1, a1, a0
@@ -900,7 +900,7 @@ define i16 @sdiv16_constant_sub_srai(i16 %a) nounwind {
; RV64IMZB: # %bb.0:
; RV64IMZB-NEXT: sext.h a1, a0
; RV64IMZB-NEXT: lui a2, 7
-; RV64IMZB-NEXT: addiw a2, a2, 1911
+; RV64IMZB-NEXT: addi a2, a2, 1911
; RV64IMZB-NEXT: mul a1, a1, a2
; RV64IMZB-NEXT: srli a1, a1, 16
; RV64IMZB-NEXT: subw a1, a1, a0
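Note the divide-by-constant cases show the equivalence also holds when the materialised value is negative as i32. For udiv64_constant above, `lui a1, 838861; addi a1, a1, -819` builds the low half of the divide-by-5 magic number: lui already sign-extends, and the 12-bit add cannot disturb bit 31 here, so the result is properly sign-extended either way. A sketch under those assumptions (the function name is invented):

```llvm
; RV64 expansion, as in udiv64_constant above:
;   lui  a1, 838861    ; a1 = 0xffffffffccccd000 (sign-extended by lui)
;   addi a1, a1, -819  ; a1 = 0xffffffffcccccccd; the 32-bit sum is
;                      ; 0xcccccccd, whose sign extension matches the
;                      ; upper bits already in place, so addiw would
;                      ; produce the identical value
;   slli a2, a1, 32    ; replicate into the top half...
;   add  a1, a1, a2    ; ...giving the magic 0xcccccccccccccccd
define i64 @udiv5(i64 %a) {
  %q = udiv i64 %a, 5
  ret i64 %q
}
```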
diff --git a/llvm/test/CodeGen/RISCV/div.ll b/llvm/test/CodeGen/RISCV/div.ll
index 415848449e13f..b160a532770db 100644
--- a/llvm/test/CodeGen/RISCV/div.ll
+++ b/llvm/test/CodeGen/RISCV/div.ll
@@ -211,7 +211,7 @@ define i64 @udiv64_constant(i64 %a) nounwind {
; RV64IM-LABEL: udiv64_constant:
; RV64IM: # %bb.0:
; RV64IM-NEXT: lui a1, 838861
-; RV64IM-NEXT: addiw a1, a1, -819
+; RV64IM-NEXT: addi a1, a1, -819
; RV64IM-NEXT: slli a2, a1, 32
; RV64IM-NEXT: add a1, a1, a2
; RV64IM-NEXT: mulhu a0, a0, a1
@@ -441,7 +441,7 @@ define i16 @udiv16(i16 %a, i16 %b) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: call __udivdi3
@@ -638,7 +638,7 @@ define i32 @sdiv_constant(i32 %a) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a0, a0
; RV64IM-NEXT: lui a1, 419430
-; RV64IM-NEXT: addiw a1, a1, 1639
+; RV64IM-NEXT: addi a1, a1, 1639
; RV64IM-NEXT: mul a0, a0, a1
; RV64IM-NEXT: srli a1, a0, 63
; RV64IM-NEXT: srai a0, a0, 33
@@ -1189,7 +1189,7 @@ define i16 @sdiv16_constant(i16 %a) nounwind {
; RV64IM-NEXT: slli a0, a0, 48
; RV64IM-NEXT: lui a1, 6
; RV64IM-NEXT: srai a0, a0, 48
-; RV64IM-NEXT: addiw a1, a1, 1639
+; RV64IM-NEXT: addi a1, a1, 1639
; RV64IM-NEXT: mul a0, a0, a1
; RV64IM-NEXT: srli a1, a0, 63
; RV64IM-NEXT: srai a0, a0, 17
diff --git a/llvm/test/CodeGen/RISCV/double-convert.ll b/llvm/test/CodeGen/RISCV/double-convert.ll
index 0716650374d0d..3df3d17bd38eb 100644
--- a/llvm/test/CodeGen/RISCV/double-convert.ll
+++ b/llvm/test/CodeGen/RISCV/double-convert.ll
@@ -1939,7 +1939,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(double %a) nounwind {
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtdf2
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: blez a0, .LBB28_2
; RV64I-NEXT: # %bb.1: # %start
; RV64I-NEXT: mv a0, a1
diff --git a/llvm/test/CodeGen/RISCV/float-convert.ll b/llvm/test/CodeGen/RISCV/float-convert.ll
index 3fc2b6598b69b..797d59dfcdc86 100644
--- a/llvm/test/CodeGen/RISCV/float-convert.ll
+++ b/llvm/test/CodeGen/RISCV/float-convert.ll
@@ -126,7 +126,7 @@ define i32 @fcvt_w_s_sat(float %a) nounwind {
; RV64I-NEXT: lui s1, 524288
; RV64I-NEXT: .LBB1_2: # %start
; RV64I-NEXT: lui a1, 323584
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB1_4
@@ -311,7 +311,7 @@ define i32 @fcvt_wu_s_sat(float %a) nounwind {
; RV64I-NEXT: call __fixunssfdi
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a1, 325632
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB4_2
@@ -788,7 +788,7 @@ define i64 @fcvt_l_s_sat(float %a) nounwind {
; RV64I-NEXT: slli s1, s3, 63
; RV64I-NEXT: .LBB12_2: # %start
; RV64I-NEXT: lui a1, 389120
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB12_4
@@ -976,7 +976,7 @@ define i64 @fcvt_lu_s_sat(float %a) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a1, 391168
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: sgtz a0, a0
; RV64I-NEXT: neg s1, a0
@@ -1462,7 +1462,7 @@ define signext i16 @fcvt_w_s_sat_i16(float %a) nounwind {
; RV64IZFINX-NEXT: fmax.s a0, a0, a2
; RV64IZFINX-NEXT: lui a2, 290816
; RV64IZFINX-NEXT: neg a1, a1
-; RV64IZFINX-NEXT: addiw a2, a2, -512
+; RV64IZFINX-NEXT: addi a2, a2, -512
; RV64IZFINX-NEXT: fmin.s a0, a0, a2
; RV64IZFINX-NEXT: fcvt.l.s a0, a0, rtz
; RV64IZFINX-NEXT: and a0, a1, a0
@@ -1529,7 +1529,7 @@ define signext i16 @fcvt_w_s_sat_i16(float %a) nounwind {
; RV64I-NEXT: lui s1, 1048568
; RV64I-NEXT: .LBB24_2: # %start
; RV64I-NEXT: lui a0, 290816
-; RV64I-NEXT: addiw a1, a0, -512
+; RV64I-NEXT: addi a1, a0, -512
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB24_4
@@ -1633,7 +1633,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(float %a) nounwind {
; RV64IZFINX: # %bb.0: # %start
; RV64IZFINX-NEXT: fmax.s a0, a0, zero
; RV64IZFINX-NEXT: lui a1, 292864
-; RV64IZFINX-NEXT: addiw a1, a1, -256
+; RV64IZFINX-NEXT: addi a1, a1, -256
; RV64IZFINX-NEXT: fmin.s a0, a0, a1
; RV64IZFINX-NEXT: fcvt.lu.s a0, a0, rtz
; RV64IZFINX-NEXT: ret
@@ -1690,11 +1690,11 @@ define zeroext i16 @fcvt_wu_s_sat_i16(float %a) nounwind {
; RV64I-NEXT: call __fixunssfdi
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 292864
-; RV64I-NEXT: addiw a1, a0, -256
+; RV64I-NEXT: addi a1, a0, -256
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: blez a0, .LBB26_2
; RV64I-NEXT: # %bb.1: # %start
; RV64I-NEXT: mv a0, a1
@@ -2132,7 +2132,7 @@ define zeroext i32 @fcvt_wu_s_sat_zext(float %a) nounwind {
; RV64I-NEXT: call __fixunssfdi
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a1, 325632
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB31_2
@@ -2240,7 +2240,7 @@ define signext i32 @fcvt_w_s_sat_sext(float %a) nounwind {
; RV64I-NEXT: lui s1, 524288
; RV64I-NEXT: .LBB32_2: # %start
; RV64I-NEXT: lui a1, 323584
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB32_4
diff --git a/llvm/test/CodeGen/RISCV/float-imm.ll b/llvm/test/CodeGen/RISCV/float-imm.ll
index 58cbc72e2197c..a010ab49b2827 100644
--- a/llvm/test/CodeGen/RISCV/float-imm.ll
+++ b/llvm/test/CodeGen/RISCV/float-imm.ll
@@ -16,19 +16,12 @@ define float @float_imm() nounwind {
; CHECK-NEXT: flw fa0, %lo(.LCPI0_0)(a0)
; CHECK-NEXT: ret
;
-; RV32ZFINX-LABEL: float_imm:
-; RV32ZFINX: # %bb.0:
-; RV32ZFINX-NEXT: lui a0, 263313
-; RV32ZFINX-NEXT: addi a0, a0, -37
-; RV32ZFINX-NEXT: # kill: def $x10_w killed $x10_w killed $x10
-; RV32ZFINX-NEXT: ret
-;
-; RV64ZFINX-LABEL: float_imm:
-; RV64ZFINX: # %bb.0:
-; RV64ZFINX-NEXT: lui a0, 263313
-; RV64ZFINX-NEXT: addiw a0, a0, -37
-; RV64ZFINX-NEXT: # kill: def $x10_w killed $x10_w killed $x10
-; RV64ZFINX-NEXT: ret
+; CHECKZFINX-LABEL: float_imm:
+; CHECKZFINX: # %bb.0:
+; CHECKZFINX-NEXT: lui a0, 263313
+; CHECKZFINX-NEXT: addi a0, a0, -37
+; CHECKZFINX-NEXT: # kill: def $x10_w killed $x10_w killed $x10
+; CHECKZFINX-NEXT: ret
ret float 3.14159274101257324218750
}
@@ -75,3 +68,6 @@ define float @float_negative_zero(ptr %pf) nounwind {
; CHECKZFINX-NEXT: ret
ret float -0.0
}
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; RV32ZFINX: {{.*}}
+; RV64ZFINX: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/float-intrinsics.ll b/llvm/test/CodeGen/RISCV/float-intrinsics.ll
index eb81e5b0eb809..be9ddc68ce667 100644
--- a/llvm/test/CodeGen/RISCV/float-intrinsics.ll
+++ b/llvm/test/CodeGen/RISCV/float-intrinsics.ll
@@ -1655,7 +1655,7 @@ define i1 @fpclass(float %x) {
; RV64I-NEXT: lui a3, 522240
; RV64I-NEXT: lui a4, 1046528
; RV64I-NEXT: srli a0, a0, 33
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: slti a1, a1, 0
; RV64I-NEXT: addi a5, a0, -1
; RV64I-NEXT: sltu a2, a5, a2
@@ -1764,7 +1764,7 @@ define i1 @isqnan_fpclass(float %x) {
; RV64I-NEXT: slli a0, a0, 33
; RV64I-NEXT: lui a1, 523264
; RV64I-NEXT: srli a0, a0, 33
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: slt a0, a1, a0
; RV64I-NEXT: ret
%1 = call i1 @llvm.is.fpclass.f32(float %x, i32 2) ; qnan
@@ -2153,7 +2153,7 @@ define i1 @isnotfinite_fpclass(float %x) {
; RV64I-NEXT: slli a0, a0, 33
; RV64I-NEXT: lui a1, 522240
; RV64I-NEXT: srli a0, a0, 33
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: slt a0, a1, a0
; RV64I-NEXT: ret
%1 = call i1 @llvm.is.fpclass.f32(float %x, i32 519) ; 0x207 = "inf|nan"
diff --git a/llvm/test/CodeGen/RISCV/fold-addi-loadstore.ll b/llvm/test/CodeGen/RISCV/fold-addi-loadstore.ll
index 291c8c3d7c90f..477a7d1ce7b6b 100644
--- a/llvm/test/CodeGen/RISCV/fold-addi-loadstore.ll
+++ b/llvm/test/CodeGen/RISCV/fold-addi-loadstore.ll
@@ -1263,7 +1263,7 @@ define i1 @pr134525() nounwind {
; RV64I-NEXT: lui a0, %hi(ki_end+2145386496)
; RV64I-NEXT: addi a0, a0, %lo(ki_end+2145386496)
; RV64I-NEXT: lui a1, 32
-; RV64I-NEXT: addiw a1, a1, 1
+; RV64I-NEXT: addi a1, a1, 1
; RV64I-NEXT: sltu a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1275,7 +1275,7 @@ define i1 @pr134525() nounwind {
; RV64I-MEDIUM-NEXT: addi a0, a0, %pcrel_lo(.Lpcrel_hi15)
; RV64I-MEDIUM-NEXT: add a0, a0, a1
; RV64I-MEDIUM-NEXT: lui a1, 32
-; RV64I-MEDIUM-NEXT: addiw a1, a1, 1
+; RV64I-MEDIUM-NEXT: addi a1, a1, 1
; RV64I-MEDIUM-NEXT: sltu a0, a0, a1
; RV64I-MEDIUM-NEXT: ret
;
@@ -1287,7 +1287,7 @@ define i1 @pr134525() nounwind {
; RV64I-LARGE-NEXT: lui a1, 523776
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: lui a1, 32
-; RV64I-LARGE-NEXT: addiw a1, a1, 1
+; RV64I-LARGE-NEXT: addi a1, a1, 1
; RV64I-LARGE-NEXT: sltu a0, a0, a1
; RV64I-LARGE-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/fold-mem-offset.ll b/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
index 7d8b8d29aa3c9..f4072ffa1e3df 100644
--- a/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
+++ b/llvm/test/CodeGen/RISCV/fold-mem-offset.ll
@@ -575,12 +575,11 @@ define signext i32 @test_large_offset(ptr %p, iXLen %x, iXLen %y) {
; RV64I-NEXT: lui a3, 2
; RV64I-NEXT: slli a1, a1, 2
; RV64I-NEXT: slli a2, a2, 2
-; RV64I-NEXT: addiw a3, a3, -1392
; RV64I-NEXT: add a0, a0, a3
; RV64I-NEXT: add a1, a0, a1
; RV64I-NEXT: add a0, a2, a0
-; RV64I-NEXT: lw a1, 0(a1)
-; RV64I-NEXT: lw a0, 40(a0)
+; RV64I-NEXT: lw a1, -1392(a1)
+; RV64I-NEXT: lw a0, -1352(a0)
; RV64I-NEXT: addw a0, a0, a1
; RV64I-NEXT: ret
;
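The fold-mem-offset.ll hunk above is an instance of the folding improvement noted in the description: with the intermediate sum produced by addi rather than addiw, the memory offset folding logic can push the low part of the large offset into the load displacements, and the addi disappears entirely (12 instructions down to 11). A reduced sketch that triggers the same shape (hypothetical input; offsets chosen so that 8192 - 1392 = 6800 and 8192 - 1352 = 6840):

```llvm
; Both offsets exceed the +/-2047 range of a 12-bit lw displacement, so
; the backend materialises lui a3, 2 (8192) and can then fold the
; remainders -1392 and -1352 directly into the loads, as in the diff.
define signext i32 @load_pair(ptr %p) {
  %g0 = getelementptr inbounds i8, ptr %p, i64 6800
  %g1 = getelementptr inbounds i8, ptr %p, i64 6840
  %v0 = load i32, ptr %g0
  %v1 = load i32, ptr %g1
  %sum = add i32 %v0, %v1
  ret i32 %sum
}
```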
diff --git a/llvm/test/CodeGen/RISCV/fpclamptosat.ll b/llvm/test/CodeGen/RISCV/fpclamptosat.ll
index c5c3b199447a9..117e3e4aac45d 100644
--- a/llvm/test/CodeGen/RISCV/fpclamptosat.ll
+++ b/llvm/test/CodeGen/RISCV/fpclamptosat.ll
@@ -585,7 +585,7 @@ define i16 @stest_f64i16(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixdfsi
; RV64IF-NEXT: lui a1, 8
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: blt a0, a1, .LBB9_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -624,7 +624,7 @@ define i16 @stest_f64i16(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.w.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 8
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: bge a0, a1, .LBB9_3
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: lui a1, 1048568
@@ -676,7 +676,7 @@ define i16 @utest_f64i16(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixunsdfsi
; RV64IF-NEXT: lui a1, 16
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: bltu a0, a1, .LBB10_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -702,7 +702,7 @@ define i16 @utest_f64i16(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.wu.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 16
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: bltu a0, a1, .LBB10_2
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: mv a0, a1
@@ -747,7 +747,7 @@ define i16 @ustest_f64i16(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixdfsi
; RV64IF-NEXT: lui a1, 16
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: blt a0, a1, .LBB11_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -779,7 +779,7 @@ define i16 @ustest_f64i16(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.w.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 16
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: blt a0, a1, .LBB11_2
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: mv a0, a1
@@ -822,7 +822,7 @@ define i16 @stest_f32i16(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.w.s a0, fa0, rtz
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bge a0, a1, .LBB12_3
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: lui a1, 1048568
@@ -862,7 +862,7 @@ define i16 @utest_f32i16(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.wu.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bltu a0, a1, .LBB13_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -895,7 +895,7 @@ define i16 @ustest_f32i16(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.w.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB14_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -949,7 +949,7 @@ define i16 @stest_f16i16(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.l.s a0, fa0, rtz
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB15_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -1004,7 +1004,7 @@ define i16 @utesth_f16i16(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.lu.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bltu a0, a1, .LBB16_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -1055,7 +1055,7 @@ define i16 @ustest_f16i16(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.l.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB17_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -2536,7 +2536,7 @@ define i16 @stest_f64i16_mm(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixdfsi
; RV64IF-NEXT: lui a1, 8
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: blt a0, a1, .LBB36_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -2575,7 +2575,7 @@ define i16 @stest_f64i16_mm(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.w.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 8
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: bge a0, a1, .LBB36_3
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: lui a1, 1048568
@@ -2625,7 +2625,7 @@ define i16 @utest_f64i16_mm(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixunsdfsi
; RV64IF-NEXT: lui a1, 16
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: bltu a0, a1, .LBB37_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -2651,7 +2651,7 @@ define i16 @utest_f64i16_mm(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.wu.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 16
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: bltu a0, a1, .LBB37_2
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: mv a0, a1
@@ -2695,7 +2695,7 @@ define i16 @ustest_f64i16_mm(double %x) {
; RV64IF-NEXT: .cfi_offset ra, -8
; RV64IF-NEXT: call __fixdfsi
; RV64IF-NEXT: lui a1, 16
-; RV64IF-NEXT: addiw a1, a1, -1
+; RV64IF-NEXT: addi a1, a1, -1
; RV64IF-NEXT: blt a0, a1, .LBB38_2
; RV64IF-NEXT: # %bb.1: # %entry
; RV64IF-NEXT: mv a0, a1
@@ -2727,7 +2727,7 @@ define i16 @ustest_f64i16_mm(double %x) {
; RV64IFD: # %bb.0: # %entry
; RV64IFD-NEXT: fcvt.w.d a0, fa0, rtz
; RV64IFD-NEXT: lui a1, 16
-; RV64IFD-NEXT: addiw a1, a1, -1
+; RV64IFD-NEXT: addi a1, a1, -1
; RV64IFD-NEXT: blt a0, a1, .LBB38_2
; RV64IFD-NEXT: # %bb.1: # %entry
; RV64IFD-NEXT: mv a0, a1
@@ -2768,7 +2768,7 @@ define i16 @stest_f32i16_mm(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.w.s a0, fa0, rtz
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bge a0, a1, .LBB39_3
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: lui a1, 1048568
@@ -2806,7 +2806,7 @@ define i16 @utest_f32i16_mm(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.wu.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bltu a0, a1, .LBB40_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -2838,7 +2838,7 @@ define i16 @ustest_f32i16_mm(float %x) {
; RV64: # %bb.0: # %entry
; RV64-NEXT: fcvt.w.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB41_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -2890,7 +2890,7 @@ define i16 @stest_f16i16_mm(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.l.s a0, fa0, rtz
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB42_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -2943,7 +2943,7 @@ define i16 @utesth_f16i16_mm(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.lu.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: bltu a0, a1, .LBB43_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
@@ -2993,7 +2993,7 @@ define i16 @ustest_f16i16_mm(half %x) {
; RV64-NEXT: call __extendhfsf2
; RV64-NEXT: fcvt.l.s a0, fa0, rtz
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: blt a0, a1, .LBB44_2
; RV64-NEXT: # %bb.1: # %entry
; RV64-NEXT: mv a0, a1
diff --git a/llvm/test/CodeGen/RISCV/fpenv.ll b/llvm/test/CodeGen/RISCV/fpenv.ll
index 11e104a290e1b..b4a1400dbd547 100644
--- a/llvm/test/CodeGen/RISCV/fpenv.ll
+++ b/llvm/test/CodeGen/RISCV/fpenv.ll
@@ -18,7 +18,7 @@ define i32 @func_01() {
; RV64IF-NEXT: frrm a0
; RV64IF-NEXT: lui a1, 66
; RV64IF-NEXT: slli a0, a0, 2
-; RV64IF-NEXT: addiw a1, a1, 769
+; RV64IF-NEXT: addi a1, a1, 769
; RV64IF-NEXT: srl a0, a1, a0
; RV64IF-NEXT: andi a0, a0, 7
; RV64IF-NEXT: ret
@@ -77,7 +77,7 @@ define i1 @test_get_rounding_sideeffect() #0 {
; RV64IF-NEXT: frrm a0
; RV64IF-NEXT: lui a1, 66
; RV64IF-NEXT: slli a0, a0, 2
-; RV64IF-NEXT: addiw s0, a1, 769
+; RV64IF-NEXT: addi s0, a1, 769
; RV64IF-NEXT: srl a0, s0, a0
; RV64IF-NEXT: andi a0, a0, 7
; RV64IF-NEXT: beqz a0, .LBB1_2
@@ -133,7 +133,7 @@ define void @func_02(i32 %rm) {
; RV64IF-NEXT: slli a0, a0, 32
; RV64IF-NEXT: lui a1, 66
; RV64IF-NEXT: srli a0, a0, 30
-; RV64IF-NEXT: addiw a1, a1, 769
+; RV64IF-NEXT: addi a1, a1, 769
; RV64IF-NEXT: srl a0, a1, a0
; RV64IF-NEXT: andi a0, a0, 7
; RV64IF-NEXT: fsrm a0
diff --git a/llvm/test/CodeGen/RISCV/half-arith.ll b/llvm/test/CodeGen/RISCV/half-arith.ll
index a218e89948d4b..84163b52bb98d 100644
--- a/llvm/test/CodeGen/RISCV/half-arith.ll
+++ b/llvm/test/CodeGen/RISCV/half-arith.ll
@@ -71,7 +71,7 @@ define half @fadd_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -153,7 +153,7 @@ define half @fsub_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -235,7 +235,7 @@ define half @fmul_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -317,7 +317,7 @@ define half @fdiv_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -546,7 +546,7 @@ define i32 @fneg_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s1, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s1, a1, -1
+; RV64I-NEXT: addi s1, a1, -1
; RV64I-NEXT: and a0, a0, s1
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv a1, a0
@@ -667,7 +667,7 @@ define half @fsgnjn_h(half %a, half %b) nounwind {
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw s3, a0, -1
+; RV64I-NEXT: addi s3, a0, -1
; RV64I-NEXT: and a0, s1, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -831,7 +831,7 @@ define half @fabs_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -972,7 +972,7 @@ define half @fmin_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -1056,7 +1056,7 @@ define half @fmax_h(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -1149,7 +1149,7 @@ define half @fmadd_h(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -1261,7 +1261,7 @@ define half @fmsub_h(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw s3, a0, -1
+; RV64I-NEXT: addi s3, a0, -1
; RV64I-NEXT: and a0, a2, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -1416,7 +1416,7 @@ define half @fnmadd_h(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui s3, 16
-; RV64I-NEXT: addiw s3, s3, -1
+; RV64I-NEXT: addi s3, s3, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -1596,7 +1596,7 @@ define half @fnmadd_h_2(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui s3, 16
-; RV64I-NEXT: addiw s3, s3, -1
+; RV64I-NEXT: addi s3, s3, -1
; RV64I-NEXT: and a0, a1, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -1761,7 +1761,7 @@ define half @fnmadd_h_3(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -1882,7 +1882,7 @@ define half @fnmadd_nsz(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -2003,7 +2003,7 @@ define half @fnmsub_h(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -2140,7 +2140,7 @@ define half @fnmsub_h_2(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw s3, a0, -1
+; RV64I-NEXT: addi s3, a0, -1
; RV64I-NEXT: and a0, a1, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -2269,7 +2269,7 @@ define half @fmadd_h_contract(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -2393,7 +2393,7 @@ define half @fmsub_h_contract(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw s3, a0, -1
+; RV64I-NEXT: addi s3, a0, -1
; RV64I-NEXT: and a0, a2, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -2552,7 +2552,7 @@ define half @fnmadd_h_contract(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui s3, 16
-; RV64I-NEXT: addiw s3, s3, -1
+; RV64I-NEXT: addi s3, s3, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -2737,7 +2737,7 @@ define half @fnmsub_h_contract(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui s3, 16
-; RV64I-NEXT: addiw s3, s3, -1
+; RV64I-NEXT: addi s3, s3, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: li a1, 0
@@ -2871,7 +2871,7 @@ define half @fsgnjx_f16(half %x, half %y) nounwind {
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a0, 12
-; RV64I-NEXT: addiw a0, a0, -1024
+; RV64I-NEXT: addi a0, a0, -1024
; RV64I-NEXT: and a0, s1, a0
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv a1, s0
diff --git a/llvm/test/CodeGen/RISCV/half-convert.ll b/llvm/test/CodeGen/RISCV/half-convert.ll
index f59f86ded76b4..c53237ed6aef7 100644
--- a/llvm/test/CodeGen/RISCV/half-convert.ll
+++ b/llvm/test/CodeGen/RISCV/half-convert.ll
@@ -272,7 +272,7 @@ define i16 @fcvt_si_h_sat(half %a) nounwind {
; RV64IZHINX-NEXT: lui a2, 290816
; RV64IZHINX-NEXT: fmax.s a1, a0, a1
; RV64IZHINX-NEXT: feq.s a0, a0, a0
-; RV64IZHINX-NEXT: addiw a2, a2, -512
+; RV64IZHINX-NEXT: addi a2, a2, -512
; RV64IZHINX-NEXT: neg a0, a0
; RV64IZHINX-NEXT: fmin.s a1, a1, a2
; RV64IZHINX-NEXT: fcvt.l.s a1, a1, rtz
@@ -300,7 +300,7 @@ define i16 @fcvt_si_h_sat(half %a) nounwind {
; RV64IZDINXZHINX-NEXT: lui a2, 290816
; RV64IZDINXZHINX-NEXT: fmax.s a1, a0, a1
; RV64IZDINXZHINX-NEXT: feq.s a0, a0, a0
-; RV64IZDINXZHINX-NEXT: addiw a2, a2, -512
+; RV64IZDINXZHINX-NEXT: addi a2, a2, -512
; RV64IZDINXZHINX-NEXT: neg a0, a0
; RV64IZDINXZHINX-NEXT: fmin.s a1, a1, a2
; RV64IZDINXZHINX-NEXT: fcvt.l.s a1, a1, rtz
@@ -372,13 +372,13 @@ define i16 @fcvt_si_h_sat(half %a) nounwind {
; RV64I-NEXT: lui s1, 1048568
; RV64I-NEXT: .LBB1_2: # %start
; RV64I-NEXT: lui a0, 290816
-; RV64I-NEXT: addiw a1, a0, -512
+; RV64I-NEXT: addi a1, a0, -512
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB1_4
; RV64I-NEXT: # %bb.3: # %start
; RV64I-NEXT: lui s1, 8
-; RV64I-NEXT: addiw s1, s1, -1
+; RV64I-NEXT: addi s1, s1, -1
; RV64I-NEXT: .LBB1_4: # %start
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: mv a1, s0
@@ -522,7 +522,7 @@ define i16 @fcvt_si_h_sat(half %a) nounwind {
; CHECK64-IZHINXMIN-NEXT: lui a2, 290816
; CHECK64-IZHINXMIN-NEXT: fmax.s a1, a0, a1
; CHECK64-IZHINXMIN-NEXT: feq.s a0, a0, a0
-; CHECK64-IZHINXMIN-NEXT: addiw a2, a2, -512
+; CHECK64-IZHINXMIN-NEXT: addi a2, a2, -512
; CHECK64-IZHINXMIN-NEXT: neg a0, a0
; CHECK64-IZHINXMIN-NEXT: fmin.s a1, a1, a2
; CHECK64-IZHINXMIN-NEXT: fcvt.l.s a1, a1, rtz
@@ -550,7 +550,7 @@ define i16 @fcvt_si_h_sat(half %a) nounwind {
; CHECK64-IZDINXZHINXMIN-NEXT: lui a2, 290816
; CHECK64-IZDINXZHINXMIN-NEXT: fmax.s a1, a0, a1
; CHECK64-IZDINXZHINXMIN-NEXT: feq.s a0, a0, a0
-; CHECK64-IZDINXZHINXMIN-NEXT: addiw a2, a2, -512
+; CHECK64-IZDINXZHINXMIN-NEXT: addi a2, a2, -512
; CHECK64-IZDINXZHINXMIN-NEXT: neg a0, a0
; CHECK64-IZDINXZHINXMIN-NEXT: fmin.s a1, a1, a2
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.l.s a1, a1, rtz
@@ -768,7 +768,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; RV64IZHINX-NEXT: fcvt.s.h a0, a0
; RV64IZHINX-NEXT: lui a1, 292864
; RV64IZHINX-NEXT: fmax.s a0, a0, zero
-; RV64IZHINX-NEXT: addiw a1, a1, -256
+; RV64IZHINX-NEXT: addi a1, a1, -256
; RV64IZHINX-NEXT: fmin.s a0, a0, a1
; RV64IZHINX-NEXT: fcvt.lu.s a0, a0, rtz
; RV64IZHINX-NEXT: ret
@@ -788,7 +788,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; RV64IZDINXZHINX-NEXT: fcvt.s.h a0, a0
; RV64IZDINXZHINX-NEXT: lui a1, 292864
; RV64IZDINXZHINX-NEXT: fmax.s a0, a0, zero
-; RV64IZDINXZHINX-NEXT: addiw a1, a1, -256
+; RV64IZDINXZHINX-NEXT: addi a1, a1, -256
; RV64IZDINXZHINX-NEXT: fmin.s a0, a0, a1
; RV64IZDINXZHINX-NEXT: fcvt.lu.s a0, a0, rtz
; RV64IZDINXZHINX-NEXT: ret
@@ -840,7 +840,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; RV64I-NEXT: sd s2, 16(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s3, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui s0, 16
-; RV64I-NEXT: addiw s0, s0, -1
+; RV64I-NEXT: addi s0, s0, -1
; RV64I-NEXT: and a0, a0, s0
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s3, a0
@@ -851,7 +851,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; RV64I-NEXT: call __gesf2
; RV64I-NEXT: mv s2, a0
; RV64I-NEXT: lui a0, 292864
-; RV64I-NEXT: addiw a1, a0, -256
+; RV64I-NEXT: addi a1, a0, -256
; RV64I-NEXT: mv a0, s3
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: bgtz a0, .LBB3_2
@@ -968,7 +968,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; CHECK64-IZHINXMIN-NEXT: fcvt.s.h a0, a0
; CHECK64-IZHINXMIN-NEXT: lui a1, 292864
; CHECK64-IZHINXMIN-NEXT: fmax.s a0, a0, zero
-; CHECK64-IZHINXMIN-NEXT: addiw a1, a1, -256
+; CHECK64-IZHINXMIN-NEXT: addi a1, a1, -256
; CHECK64-IZHINXMIN-NEXT: fmin.s a0, a0, a1
; CHECK64-IZHINXMIN-NEXT: fcvt.lu.s a0, a0, rtz
; CHECK64-IZHINXMIN-NEXT: ret
@@ -988,7 +988,7 @@ define i16 @fcvt_ui_h_sat(half %a) nounwind {
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.s.h a0, a0
; CHECK64-IZDINXZHINXMIN-NEXT: lui a1, 292864
; CHECK64-IZDINXZHINXMIN-NEXT: fmax.s a0, a0, zero
-; CHECK64-IZDINXZHINXMIN-NEXT: addiw a1, a1, -256
+; CHECK64-IZDINXZHINXMIN-NEXT: addi a1, a1, -256
; CHECK64-IZDINXZHINXMIN-NEXT: fmin.s a0, a0, a1
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.lu.s a0, a0, rtz
; CHECK64-IZDINXZHINXMIN-NEXT: ret
@@ -1244,7 +1244,7 @@ define i32 @fcvt_w_h_sat(half %a) nounwind {
; RV64I-NEXT: lui s1, 524288
; RV64I-NEXT: .LBB5_2: # %start
; RV64I-NEXT: lui a1, 323584
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB5_4
@@ -1819,7 +1819,7 @@ define i32 @fcvt_wu_h_sat(half %a) nounwind {
; RV64I-NEXT: call __fixunssfdi
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a1, 325632
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB8_2
@@ -2411,7 +2411,7 @@ define i64 @fcvt_l_h_sat(half %a) nounwind {
; RV64I-NEXT: slli s1, s3, 63
; RV64I-NEXT: .LBB10_2: # %start
; RV64I-NEXT: lui a1, 389120
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB10_4
@@ -3076,7 +3076,7 @@ define i64 @fcvt_lu_h_sat(half %a) nounwind {
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a1, 391168
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: sgtz a0, a0
; RV64I-NEXT: neg s1, a0
@@ -6374,7 +6374,7 @@ define signext i16 @fcvt_w_s_sat_i16(half %a) nounwind {
; RV64IZHINX-NEXT: lui a2, 290816
; RV64IZHINX-NEXT: fmax.s a1, a0, a1
; RV64IZHINX-NEXT: feq.s a0, a0, a0
-; RV64IZHINX-NEXT: addiw a2, a2, -512
+; RV64IZHINX-NEXT: addi a2, a2, -512
; RV64IZHINX-NEXT: neg a0, a0
; RV64IZHINX-NEXT: fmin.s a1, a1, a2
; RV64IZHINX-NEXT: fcvt.l.s a1, a1, rtz
@@ -6402,7 +6402,7 @@ define signext i16 @fcvt_w_s_sat_i16(half %a) nounwind {
; RV64IZDINXZHINX-NEXT: lui a2, 290816
; RV64IZDINXZHINX-NEXT: fmax.s a1, a0, a1
; RV64IZDINXZHINX-NEXT: feq.s a0, a0, a0
-; RV64IZDINXZHINX-NEXT: addiw a2, a2, -512
+; RV64IZDINXZHINX-NEXT: addi a2, a2, -512
; RV64IZDINXZHINX-NEXT: neg a0, a0
; RV64IZDINXZHINX-NEXT: fmin.s a1, a1, a2
; RV64IZDINXZHINX-NEXT: fcvt.l.s a1, a1, rtz
@@ -6476,7 +6476,7 @@ define signext i16 @fcvt_w_s_sat_i16(half %a) nounwind {
; RV64I-NEXT: lui s1, 1048568
; RV64I-NEXT: .LBB32_2: # %start
; RV64I-NEXT: lui a0, 290816
-; RV64I-NEXT: addiw a1, a0, -512
+; RV64I-NEXT: addi a1, a0, -512
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB32_4
@@ -6628,7 +6628,7 @@ define signext i16 @fcvt_w_s_sat_i16(half %a) nounwind {
; CHECK64-IZHINXMIN-NEXT: lui a2, 290816
; CHECK64-IZHINXMIN-NEXT: fmax.s a1, a0, a1
; CHECK64-IZHINXMIN-NEXT: feq.s a0, a0, a0
-; CHECK64-IZHINXMIN-NEXT: addiw a2, a2, -512
+; CHECK64-IZHINXMIN-NEXT: addi a2, a2, -512
; CHECK64-IZHINXMIN-NEXT: neg a0, a0
; CHECK64-IZHINXMIN-NEXT: fmin.s a1, a1, a2
; CHECK64-IZHINXMIN-NEXT: fcvt.l.s a1, a1, rtz
@@ -6656,7 +6656,7 @@ define signext i16 @fcvt_w_s_sat_i16(half %a) nounwind {
; CHECK64-IZDINXZHINXMIN-NEXT: lui a2, 290816
; CHECK64-IZDINXZHINXMIN-NEXT: fmax.s a1, a0, a1
; CHECK64-IZDINXZHINXMIN-NEXT: feq.s a0, a0, a0
-; CHECK64-IZDINXZHINXMIN-NEXT: addiw a2, a2, -512
+; CHECK64-IZDINXZHINXMIN-NEXT: addi a2, a2, -512
; CHECK64-IZDINXZHINXMIN-NEXT: neg a0, a0
; CHECK64-IZDINXZHINXMIN-NEXT: fmin.s a1, a1, a2
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.l.s a1, a1, rtz
@@ -6873,7 +6873,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; RV64IZHINX-NEXT: fcvt.s.h a0, a0
; RV64IZHINX-NEXT: lui a1, 292864
; RV64IZHINX-NEXT: fmax.s a0, a0, zero
-; RV64IZHINX-NEXT: addiw a1, a1, -256
+; RV64IZHINX-NEXT: addi a1, a1, -256
; RV64IZHINX-NEXT: fmin.s a0, a0, a1
; RV64IZHINX-NEXT: fcvt.lu.s a0, a0, rtz
; RV64IZHINX-NEXT: ret
@@ -6893,7 +6893,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; RV64IZDINXZHINX-NEXT: fcvt.s.h a0, a0
; RV64IZDINXZHINX-NEXT: lui a1, 292864
; RV64IZDINXZHINX-NEXT: fmax.s a0, a0, zero
-; RV64IZDINXZHINX-NEXT: addiw a1, a1, -256
+; RV64IZDINXZHINX-NEXT: addi a1, a1, -256
; RV64IZDINXZHINX-NEXT: fmin.s a0, a0, a1
; RV64IZDINXZHINX-NEXT: fcvt.lu.s a0, a0, rtz
; RV64IZDINXZHINX-NEXT: ret
@@ -6948,7 +6948,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; RV64I-NEXT: sd s2, 16(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s3, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui s3, 16
-; RV64I-NEXT: addiw s3, s3, -1
+; RV64I-NEXT: addi s3, s3, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -6959,7 +6959,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; RV64I-NEXT: call __gesf2
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a0, 292864
-; RV64I-NEXT: addiw a1, a0, -256
+; RV64I-NEXT: addi a1, a0, -256
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB34_2
@@ -7079,7 +7079,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; CHECK64-IZHINXMIN-NEXT: fcvt.s.h a0, a0
; CHECK64-IZHINXMIN-NEXT: lui a1, 292864
; CHECK64-IZHINXMIN-NEXT: fmax.s a0, a0, zero
-; CHECK64-IZHINXMIN-NEXT: addiw a1, a1, -256
+; CHECK64-IZHINXMIN-NEXT: addi a1, a1, -256
; CHECK64-IZHINXMIN-NEXT: fmin.s a0, a0, a1
; CHECK64-IZHINXMIN-NEXT: fcvt.lu.s a0, a0, rtz
; CHECK64-IZHINXMIN-NEXT: ret
@@ -7099,7 +7099,7 @@ define zeroext i16 @fcvt_wu_s_sat_i16(half %a) nounwind {
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.s.h a0, a0
; CHECK64-IZDINXZHINXMIN-NEXT: lui a1, 292864
; CHECK64-IZDINXZHINXMIN-NEXT: fmax.s a0, a0, zero
-; CHECK64-IZDINXZHINXMIN-NEXT: addiw a1, a1, -256
+; CHECK64-IZDINXZHINXMIN-NEXT: addi a1, a1, -256
; CHECK64-IZDINXZHINXMIN-NEXT: fmin.s a0, a0, a1
; CHECK64-IZDINXZHINXMIN-NEXT: fcvt.lu.s a0, a0, rtz
; CHECK64-IZDINXZHINXMIN-NEXT: ret
@@ -8175,7 +8175,7 @@ define zeroext i32 @fcvt_wu_h_sat_zext(half %a) nounwind {
; RV64I-NEXT: call __fixunssfdi
; RV64I-NEXT: mv s1, a0
; RV64I-NEXT: lui a1, 325632
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB39_2
@@ -8444,7 +8444,7 @@ define signext i32 @fcvt_w_h_sat_sext(half %a) nounwind {
; RV64I-NEXT: lui s1, 524288
; RV64I-NEXT: .LBB40_2: # %start
; RV64I-NEXT: lui a1, 323584
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s0
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: blez a0, .LBB40_4
diff --git a/llvm/test/CodeGen/RISCV/half-imm.ll b/llvm/test/CodeGen/RISCV/half-imm.ll
index 1045df1c3e766..d68e19d15b4bb 100644
--- a/llvm/test/CodeGen/RISCV/half-imm.ll
+++ b/llvm/test/CodeGen/RISCV/half-imm.ll
@@ -38,7 +38,7 @@ define half @half_imm() nounwind {
; RV64IZHINX-LABEL: half_imm:
; RV64IZHINX: # %bb.0:
; RV64IZHINX-NEXT: lui a0, 4
-; RV64IZHINX-NEXT: addiw a0, a0, 512
+; RV64IZHINX-NEXT: addi a0, a0, 512
; RV64IZHINX-NEXT: # kill: def $x10_h killed $x10_h killed $x10
; RV64IZHINX-NEXT: ret
;
@@ -48,19 +48,12 @@ define half @half_imm() nounwind {
; CHECKIZFHMIN-NEXT: flh fa0, %lo(.LCPI0_0)(a0)
; CHECKIZFHMIN-NEXT: ret
;
-; RV32IZHINXMIN-LABEL: half_imm:
-; RV32IZHINXMIN: # %bb.0:
-; RV32IZHINXMIN-NEXT: lui a0, 4
-; RV32IZHINXMIN-NEXT: addi a0, a0, 512
-; RV32IZHINXMIN-NEXT: # kill: def $x10_h killed $x10_h killed $x10
-; RV32IZHINXMIN-NEXT: ret
-;
-; RV64IZHINXMIN-LABEL: half_imm:
-; RV64IZHINXMIN: # %bb.0:
-; RV64IZHINXMIN-NEXT: lui a0, 4
-; RV64IZHINXMIN-NEXT: addiw a0, a0, 512
-; RV64IZHINXMIN-NEXT: # kill: def $x10_h killed $x10_h killed $x10
-; RV64IZHINXMIN-NEXT: ret
+; CHECKIZHINXMIN-LABEL: half_imm:
+; CHECKIZHINXMIN: # %bb.0:
+; CHECKIZHINXMIN-NEXT: lui a0, 4
+; CHECKIZHINXMIN-NEXT: addi a0, a0, 512
+; CHECKIZHINXMIN-NEXT: # kill: def $x10_h killed $x10_h killed $x10
+; CHECKIZHINXMIN-NEXT: ret
ret half 3.0
}
@@ -163,3 +156,6 @@ define half @half_negative_zero(ptr %pf) nounwind {
; CHECKIZHINXMIN-NEXT: ret
ret half -0.0
}
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; RV32IZHINXMIN: {{.*}}
+; RV64IZHINXMIN: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/half-intrinsics.ll b/llvm/test/CodeGen/RISCV/half-intrinsics.ll
index 458bd0dc8d17c..4f0026175e7c7 100644
--- a/llvm/test/CodeGen/RISCV/half-intrinsics.ll
+++ b/llvm/test/CodeGen/RISCV/half-intrinsics.ll
@@ -649,7 +649,7 @@ define half @sincos_f16(half %a) nounwind {
; RV64I-NEXT: sd s1, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s0, a0
@@ -905,7 +905,7 @@ define half @pow_f16(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -1748,7 +1748,7 @@ define half @fma_f16(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -1853,7 +1853,7 @@ define half @fmuladd_f16(half %a, half %b, half %c) nounwind {
; RV64I-NEXT: mv s0, a2
; RV64I-NEXT: mv s1, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s3, a1, -1
+; RV64I-NEXT: addi s3, a1, -1
; RV64I-NEXT: and a0, a0, s3
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s2, a0
@@ -2015,7 +2015,7 @@ define half @minnum_f16(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -2099,7 +2099,7 @@ define half @maxnum_f16(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -3143,7 +3143,7 @@ define half @maximumnum_half(half %x, half %y) {
; RV64I-NEXT: .cfi_offset s2, -32
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -3247,7 +3247,7 @@ define half @minimumnum_half(half %x, half %y) {
; RV64I-NEXT: .cfi_offset s2, -32
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
@@ -3990,7 +3990,7 @@ define half @atan2_f16(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
diff --git a/llvm/test/CodeGen/RISCV/i64-icmp.ll b/llvm/test/CodeGen/RISCV/i64-icmp.ll
index e7efe76968cc0..49103231a075f 100644
--- a/llvm/test/CodeGen/RISCV/i64-icmp.ll
+++ b/llvm/test/CodeGen/RISCV/i64-icmp.ll
@@ -28,7 +28,7 @@ define i64 @icmp_eq_constant_2049(i64 %a) nounwind {
; RV64I-LABEL: icmp_eq_constant_2049:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, -2047
+; RV64I-NEXT: addi a1, a1, -2047
; RV64I-NEXT: xor a0, a0, a1
; RV64I-NEXT: seqz a0, a0
; RV64I-NEXT: ret
@@ -106,7 +106,7 @@ define i64 @icmp_ne_constant_2049(i64 %a) nounwind {
; RV64I-LABEL: icmp_ne_constant_2049:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, -2047
+; RV64I-NEXT: addi a1, a1, -2047
; RV64I-NEXT: xor a0, a0, a1
; RV64I-NEXT: snez a0, a0
; RV64I-NEXT: ret
@@ -226,7 +226,7 @@ define i64 @icmp_ugt_constant_neg_2050(i64 %a) nounwind {
; RV64I-LABEL: icmp_ugt_constant_neg_2050:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2046
+; RV64I-NEXT: addi a1, a1, 2046
; RV64I-NEXT: sltu a0, a1, a0
; RV64I-NEXT: ret
; 18446744073709549566 signed extend is -2050
@@ -294,7 +294,7 @@ define i64 @icmp_uge_constant_neg_2049(i64 %a) nounwind {
; RV64I-LABEL: icmp_uge_constant_neg_2049:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2046
+; RV64I-NEXT: addi a1, a1, 2046
; RV64I-NEXT: sltu a0, a1, a0
; RV64I-NEXT: ret
; 18446744073709549567 signed extend is -2049
@@ -359,7 +359,7 @@ define i64 @icmp_ult_constant_neg_2049(i64 %a) nounwind {
; RV64I-LABEL: icmp_ult_constant_neg_2049:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: sltu a0, a0, a1
; RV64I-NEXT: ret
; 18446744073709549567 signed extend is -2049
@@ -425,7 +425,7 @@ define i64 @icmp_ule_constant_neg_2050(i64 %a) nounwind {
; RV64I-LABEL: icmp_ule_constant_neg_2050:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: sltu a0, a0, a1
; RV64I-NEXT: ret
; 18446744073709549566 signed extend is -2050
@@ -491,7 +491,7 @@ define i64 @icmp_sgt_constant_neg_2050(i64 %a) nounwind {
; RV64I-LABEL: icmp_sgt_constant_neg_2050:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2046
+; RV64I-NEXT: addi a1, a1, 2046
; RV64I-NEXT: slt a0, a1, a0
; RV64I-NEXT: ret
%1 = icmp sgt i64 %a, -2050
@@ -621,7 +621,7 @@ define i64 @icmp_slt_constant_neg_2049(i64 %a) nounwind {
; RV64I-LABEL: icmp_slt_constant_neg_2049:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: slt a0, a0, a1
; RV64I-NEXT: ret
%1 = icmp slt i64 %a, -2049
@@ -686,7 +686,7 @@ define i64 @icmp_sle_constant_neg_2050(i64 %a) nounwind {
; RV64I-LABEL: icmp_sle_constant_neg_2050:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: slt a0, a0, a1
; RV64I-NEXT: ret
%1 = icmp sle i64 %a, -2050
@@ -712,7 +712,7 @@ define i64 @icmp_eq_zext_inreg_large_constant(i64 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: sext.w a0, a0
; RV64I-NEXT: lui a1, 563901
-; RV64I-NEXT: addiw a1, a1, -529
+; RV64I-NEXT: addi a1, a1, -529
; RV64I-NEXT: xor a0, a0, a1
; RV64I-NEXT: seqz a0, a0
; RV64I-NEXT: ret
@@ -753,7 +753,7 @@ define i64 @icmp_ne_zext_inreg_umin(i64 %a) nounwind {
; RV64I-LABEL: icmp_ne_zext_inreg_umin:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 30141
-; RV64I-NEXT: addiw a1, a1, -747
+; RV64I-NEXT: addi a1, a1, -747
; RV64I-NEXT: bltu a0, a1, .LBB67_2
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: mv a0, a1
diff --git a/llvm/test/CodeGen/RISCV/imm.ll b/llvm/test/CodeGen/RISCV/imm.ll
index f324a9bc120ef..876899ba48347 100644
--- a/llvm/test/CodeGen/RISCV/imm.ll
+++ b/llvm/test/CodeGen/RISCV/imm.ll
@@ -186,31 +186,31 @@ define signext i32 @pos_i32() nounwind {
; RV64I-LABEL: pos_i32:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 423811
-; RV64I-NEXT: addiw a0, a0, -1297
+; RV64I-NEXT: addi a0, a0, -1297
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: pos_i32:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 423811
-; RV64IZBA-NEXT: addiw a0, a0, -1297
+; RV64IZBA-NEXT: addi a0, a0, -1297
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: pos_i32:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 423811
-; RV64IZBB-NEXT: addiw a0, a0, -1297
+; RV64IZBB-NEXT: addi a0, a0, -1297
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: pos_i32:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 423811
-; RV64IZBS-NEXT: addiw a0, a0, -1297
+; RV64IZBS-NEXT: addi a0, a0, -1297
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: pos_i32:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 423811
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1297
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1297
; RV64IXTHEADBB-NEXT: ret
;
; RV32-REMAT-LABEL: pos_i32:
@@ -222,7 +222,7 @@ define signext i32 @pos_i32() nounwind {
; RV64-REMAT-LABEL: pos_i32:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 423811
-; RV64-REMAT-NEXT: addiw a0, a0, -1297
+; RV64-REMAT-NEXT: addi a0, a0, -1297
; RV64-REMAT-NEXT: ret
ret i32 1735928559
}
@@ -242,31 +242,31 @@ define signext i32 @neg_i32() nounwind {
; RV64I-LABEL: neg_i32:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 912092
-; RV64I-NEXT: addiw a0, a0, -273
+; RV64I-NEXT: addi a0, a0, -273
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: neg_i32:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 912092
-; RV64IZBA-NEXT: addiw a0, a0, -273
+; RV64IZBA-NEXT: addi a0, a0, -273
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: neg_i32:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 912092
-; RV64IZBB-NEXT: addiw a0, a0, -273
+; RV64IZBB-NEXT: addi a0, a0, -273
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: neg_i32:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 912092
-; RV64IZBS-NEXT: addiw a0, a0, -273
+; RV64IZBS-NEXT: addi a0, a0, -273
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: neg_i32:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 912092
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -273
+; RV64IXTHEADBB-NEXT: addi a0, a0, -273
; RV64IXTHEADBB-NEXT: ret
;
; RV32-REMAT-LABEL: neg_i32:
@@ -278,7 +278,7 @@ define signext i32 @neg_i32() nounwind {
; RV64-REMAT-LABEL: neg_i32:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 912092
-; RV64-REMAT-NEXT: addiw a0, a0, -273
+; RV64-REMAT-NEXT: addi a0, a0, -273
; RV64-REMAT-NEXT: ret
ret i32 -559038737
}
@@ -396,31 +396,31 @@ define signext i32 @imm_left_shifted_addi() nounwind {
; RV64I-LABEL: imm_left_shifted_addi:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 32
-; RV64I-NEXT: addiw a0, a0, -64
+; RV64I-NEXT: addi a0, a0, -64
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: imm_left_shifted_addi:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 32
-; RV64IZBA-NEXT: addiw a0, a0, -64
+; RV64IZBA-NEXT: addi a0, a0, -64
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_left_shifted_addi:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 32
-; RV64IZBB-NEXT: addiw a0, a0, -64
+; RV64IZBB-NEXT: addi a0, a0, -64
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: imm_left_shifted_addi:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 32
-; RV64IZBS-NEXT: addiw a0, a0, -64
+; RV64IZBS-NEXT: addi a0, a0, -64
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_left_shifted_addi:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 32
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -64
+; RV64IXTHEADBB-NEXT: addi a0, a0, -64
; RV64IXTHEADBB-NEXT: ret
;
; RV32-REMAT-LABEL: imm_left_shifted_addi:
@@ -432,7 +432,7 @@ define signext i32 @imm_left_shifted_addi() nounwind {
; RV64-REMAT-LABEL: imm_left_shifted_addi:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 32
-; RV64-REMAT-NEXT: addiw a0, a0, -64
+; RV64-REMAT-NEXT: addi a0, a0, -64
; RV64-REMAT-NEXT: ret
ret i32 131008 ; 0x1FFC0
}
@@ -512,31 +512,31 @@ define signext i32 @imm_right_shifted_lui() nounwind {
; RV64I-LABEL: imm_right_shifted_lui:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 56
-; RV64I-NEXT: addiw a0, a0, 580
+; RV64I-NEXT: addi a0, a0, 580
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: imm_right_shifted_lui:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 56
-; RV64IZBA-NEXT: addiw a0, a0, 580
+; RV64IZBA-NEXT: addi a0, a0, 580
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_right_shifted_lui:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 56
-; RV64IZBB-NEXT: addiw a0, a0, 580
+; RV64IZBB-NEXT: addi a0, a0, 580
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: imm_right_shifted_lui:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 56
-; RV64IZBS-NEXT: addiw a0, a0, 580
+; RV64IZBS-NEXT: addi a0, a0, 580
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_right_shifted_lui:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 56
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 580
+; RV64IXTHEADBB-NEXT: addi a0, a0, 580
; RV64IXTHEADBB-NEXT: ret
;
; RV32-REMAT-LABEL: imm_right_shifted_lui:
@@ -548,7 +548,7 @@ define signext i32 @imm_right_shifted_lui() nounwind {
; RV64-REMAT-LABEL: imm_right_shifted_lui:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 56
-; RV64-REMAT-NEXT: addiw a0, a0, 580
+; RV64-REMAT-NEXT: addi a0, a0, 580
; RV64-REMAT-NEXT: ret
ret i32 229956 ; 0x38244
}
@@ -1664,7 +1664,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64I-LABEL: imm_2reg_1:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 74565
-; RV64I-NEXT: addiw a0, a0, 1656
+; RV64I-NEXT: addi a0, a0, 1656
; RV64I-NEXT: slli a1, a0, 57
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
@@ -1672,7 +1672,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64IZBA-LABEL: imm_2reg_1:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 74565
-; RV64IZBA-NEXT: addiw a0, a0, 1656
+; RV64IZBA-NEXT: addi a0, a0, 1656
; RV64IZBA-NEXT: slli a1, a0, 57
; RV64IZBA-NEXT: add a0, a0, a1
; RV64IZBA-NEXT: ret
@@ -1680,7 +1680,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64IZBB-LABEL: imm_2reg_1:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 74565
-; RV64IZBB-NEXT: addiw a0, a0, 1656
+; RV64IZBB-NEXT: addi a0, a0, 1656
; RV64IZBB-NEXT: slli a1, a0, 57
; RV64IZBB-NEXT: add a0, a0, a1
; RV64IZBB-NEXT: ret
@@ -1688,7 +1688,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64IZBS-LABEL: imm_2reg_1:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 74565
-; RV64IZBS-NEXT: addiw a0, a0, 1656
+; RV64IZBS-NEXT: addi a0, a0, 1656
; RV64IZBS-NEXT: slli a1, a0, 57
; RV64IZBS-NEXT: add a0, a0, a1
; RV64IZBS-NEXT: ret
@@ -1696,7 +1696,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64IXTHEADBB-LABEL: imm_2reg_1:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 74565
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1656
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1656
; RV64IXTHEADBB-NEXT: slli a1, a0, 57
; RV64IXTHEADBB-NEXT: add a0, a0, a1
; RV64IXTHEADBB-NEXT: ret
@@ -1711,7 +1711,7 @@ define i64 @imm_2reg_1() nounwind {
; RV64-REMAT-LABEL: imm_2reg_1:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 74565
-; RV64-REMAT-NEXT: addiw a0, a0, 1656
+; RV64-REMAT-NEXT: addi a0, a0, 1656
; RV64-REMAT-NEXT: slli a1, a0, 57
; RV64-REMAT-NEXT: add a0, a0, a1
; RV64-REMAT-NEXT: ret
@@ -1909,7 +1909,7 @@ define i64 @imm_5372288229() {
; RV64I-LABEL: imm_5372288229:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 160
-; RV64I-NEXT: addiw a0, a0, 437
+; RV64I-NEXT: addi a0, a0, 437
; RV64I-NEXT: slli a0, a0, 13
; RV64I-NEXT: addi a0, a0, -795
; RV64I-NEXT: ret
@@ -1924,7 +1924,7 @@ define i64 @imm_5372288229() {
; RV64IZBB-LABEL: imm_5372288229:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 160
-; RV64IZBB-NEXT: addiw a0, a0, 437
+; RV64IZBB-NEXT: addi a0, a0, 437
; RV64IZBB-NEXT: slli a0, a0, 13
; RV64IZBB-NEXT: addi a0, a0, -795
; RV64IZBB-NEXT: ret
@@ -1932,14 +1932,14 @@ define i64 @imm_5372288229() {
; RV64IZBS-LABEL: imm_5372288229:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 263018
-; RV64IZBS-NEXT: addiw a0, a0, -795
+; RV64IZBS-NEXT: addi a0, a0, -795
; RV64IZBS-NEXT: bseti a0, a0, 32
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_5372288229:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 160
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 437
+; RV64IXTHEADBB-NEXT: addi a0, a0, 437
; RV64IXTHEADBB-NEXT: slli a0, a0, 13
; RV64IXTHEADBB-NEXT: addi a0, a0, -795
; RV64IXTHEADBB-NEXT: ret
@@ -1954,7 +1954,7 @@ define i64 @imm_5372288229() {
; RV64-REMAT-LABEL: imm_5372288229:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 160
-; RV64-REMAT-NEXT: addiw a0, a0, 437
+; RV64-REMAT-NEXT: addi a0, a0, 437
; RV64-REMAT-NEXT: slli a0, a0, 13
; RV64-REMAT-NEXT: addi a0, a0, -795
; RV64-REMAT-NEXT: ret
@@ -1978,7 +1978,7 @@ define i64 @imm_neg_5372288229() {
; RV64I-LABEL: imm_neg_5372288229:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1048416
-; RV64I-NEXT: addiw a0, a0, -437
+; RV64I-NEXT: addi a0, a0, -437
; RV64I-NEXT: slli a0, a0, 13
; RV64I-NEXT: addi a0, a0, 795
; RV64I-NEXT: ret
@@ -1986,14 +1986,14 @@ define i64 @imm_neg_5372288229() {
; RV64IZBA-LABEL: imm_neg_5372288229:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 611378
-; RV64IZBA-NEXT: addiw a0, a0, 265
+; RV64IZBA-NEXT: addi a0, a0, 265
; RV64IZBA-NEXT: sh1add a0, a0, a0
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_neg_5372288229:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1048416
-; RV64IZBB-NEXT: addiw a0, a0, -437
+; RV64IZBB-NEXT: addi a0, a0, -437
; RV64IZBB-NEXT: slli a0, a0, 13
; RV64IZBB-NEXT: addi a0, a0, 795
; RV64IZBB-NEXT: ret
@@ -2001,14 +2001,14 @@ define i64 @imm_neg_5372288229() {
; RV64IZBS-LABEL: imm_neg_5372288229:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 785558
-; RV64IZBS-NEXT: addiw a0, a0, 795
+; RV64IZBS-NEXT: addi a0, a0, 795
; RV64IZBS-NEXT: bclri a0, a0, 32
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_5372288229:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1048416
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -437
+; RV64IXTHEADBB-NEXT: addi a0, a0, -437
; RV64IXTHEADBB-NEXT: slli a0, a0, 13
; RV64IXTHEADBB-NEXT: addi a0, a0, 795
; RV64IXTHEADBB-NEXT: ret
@@ -2023,7 +2023,7 @@ define i64 @imm_neg_5372288229() {
; RV64-REMAT-LABEL: imm_neg_5372288229:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1048416
-; RV64-REMAT-NEXT: addiw a0, a0, -437
+; RV64-REMAT-NEXT: addi a0, a0, -437
; RV64-REMAT-NEXT: slli a0, a0, 13
; RV64-REMAT-NEXT: addi a0, a0, 795
; RV64-REMAT-NEXT: ret
@@ -2047,7 +2047,7 @@ define i64 @imm_8953813715() {
; RV64I-LABEL: imm_8953813715:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 267
-; RV64I-NEXT: addiw a0, a0, -637
+; RV64I-NEXT: addi a0, a0, -637
; RV64I-NEXT: slli a0, a0, 13
; RV64I-NEXT: addi a0, a0, -1325
; RV64I-NEXT: ret
@@ -2055,14 +2055,14 @@ define i64 @imm_8953813715() {
; RV64IZBA-LABEL: imm_8953813715:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 437198
-; RV64IZBA-NEXT: addiw a0, a0, -265
+; RV64IZBA-NEXT: addi a0, a0, -265
; RV64IZBA-NEXT: sh2add a0, a0, a0
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_8953813715:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 267
-; RV64IZBB-NEXT: addiw a0, a0, -637
+; RV64IZBB-NEXT: addi a0, a0, -637
; RV64IZBB-NEXT: slli a0, a0, 13
; RV64IZBB-NEXT: addi a0, a0, -1325
; RV64IZBB-NEXT: ret
@@ -2070,14 +2070,14 @@ define i64 @imm_8953813715() {
; RV64IZBS-LABEL: imm_8953813715:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 88838
-; RV64IZBS-NEXT: addiw a0, a0, -1325
+; RV64IZBS-NEXT: addi a0, a0, -1325
; RV64IZBS-NEXT: bseti a0, a0, 33
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_8953813715:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 267
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -637
+; RV64IXTHEADBB-NEXT: addi a0, a0, -637
; RV64IXTHEADBB-NEXT: slli a0, a0, 13
; RV64IXTHEADBB-NEXT: addi a0, a0, -1325
; RV64IXTHEADBB-NEXT: ret
@@ -2092,7 +2092,7 @@ define i64 @imm_8953813715() {
; RV64-REMAT-LABEL: imm_8953813715:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 267
-; RV64-REMAT-NEXT: addiw a0, a0, -637
+; RV64-REMAT-NEXT: addi a0, a0, -637
; RV64-REMAT-NEXT: slli a0, a0, 13
; RV64-REMAT-NEXT: addi a0, a0, -1325
; RV64-REMAT-NEXT: ret
@@ -2116,7 +2116,7 @@ define i64 @imm_neg_8953813715() {
; RV64I-LABEL: imm_neg_8953813715:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1048309
-; RV64I-NEXT: addiw a0, a0, 637
+; RV64I-NEXT: addi a0, a0, 637
; RV64I-NEXT: slli a0, a0, 13
; RV64I-NEXT: addi a0, a0, 1325
; RV64I-NEXT: ret
@@ -2124,14 +2124,14 @@ define i64 @imm_neg_8953813715() {
; RV64IZBA-LABEL: imm_neg_8953813715:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 611378
-; RV64IZBA-NEXT: addiw a0, a0, 265
+; RV64IZBA-NEXT: addi a0, a0, 265
; RV64IZBA-NEXT: sh2add a0, a0, a0
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_neg_8953813715:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1048309
-; RV64IZBB-NEXT: addiw a0, a0, 637
+; RV64IZBB-NEXT: addi a0, a0, 637
; RV64IZBB-NEXT: slli a0, a0, 13
; RV64IZBB-NEXT: addi a0, a0, 1325
; RV64IZBB-NEXT: ret
@@ -2139,14 +2139,14 @@ define i64 @imm_neg_8953813715() {
; RV64IZBS-LABEL: imm_neg_8953813715:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 959738
-; RV64IZBS-NEXT: addiw a0, a0, 1325
+; RV64IZBS-NEXT: addi a0, a0, 1325
; RV64IZBS-NEXT: bclri a0, a0, 33
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_8953813715:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1048309
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 637
+; RV64IXTHEADBB-NEXT: addi a0, a0, 637
; RV64IXTHEADBB-NEXT: slli a0, a0, 13
; RV64IXTHEADBB-NEXT: addi a0, a0, 1325
; RV64IXTHEADBB-NEXT: ret
@@ -2161,7 +2161,7 @@ define i64 @imm_neg_8953813715() {
; RV64-REMAT-LABEL: imm_neg_8953813715:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1048309
-; RV64-REMAT-NEXT: addiw a0, a0, 637
+; RV64-REMAT-NEXT: addi a0, a0, 637
; RV64-REMAT-NEXT: slli a0, a0, 13
; RV64-REMAT-NEXT: addi a0, a0, 1325
; RV64-REMAT-NEXT: ret
@@ -2185,7 +2185,7 @@ define i64 @imm_16116864687() {
; RV64I-LABEL: imm_16116864687:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 961
-; RV64I-NEXT: addiw a0, a0, -1475
+; RV64I-NEXT: addi a0, a0, -1475
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1711
; RV64I-NEXT: ret
@@ -2193,14 +2193,14 @@ define i64 @imm_16116864687() {
; RV64IZBA-LABEL: imm_16116864687:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 437198
-; RV64IZBA-NEXT: addiw a0, a0, -265
+; RV64IZBA-NEXT: addi a0, a0, -265
; RV64IZBA-NEXT: sh3add a0, a0, a0
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_16116864687:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 961
-; RV64IZBB-NEXT: addiw a0, a0, -1475
+; RV64IZBB-NEXT: addi a0, a0, -1475
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1711
; RV64IZBB-NEXT: ret
@@ -2208,7 +2208,7 @@ define i64 @imm_16116864687() {
; RV64IZBS-LABEL: imm_16116864687:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 961
-; RV64IZBS-NEXT: addiw a0, a0, -1475
+; RV64IZBS-NEXT: addi a0, a0, -1475
; RV64IZBS-NEXT: slli a0, a0, 12
; RV64IZBS-NEXT: addi a0, a0, 1711
; RV64IZBS-NEXT: ret
@@ -2216,7 +2216,7 @@ define i64 @imm_16116864687() {
; RV64IXTHEADBB-LABEL: imm_16116864687:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 961
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1475
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1475
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1711
; RV64IXTHEADBB-NEXT: ret
@@ -2231,7 +2231,7 @@ define i64 @imm_16116864687() {
; RV64-REMAT-LABEL: imm_16116864687:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 961
-; RV64-REMAT-NEXT: addiw a0, a0, -1475
+; RV64-REMAT-NEXT: addi a0, a0, -1475
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1711
; RV64-REMAT-NEXT: ret
@@ -2255,7 +2255,7 @@ define i64 @imm_neg_16116864687() {
; RV64I-LABEL: imm_neg_16116864687:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1047615
-; RV64I-NEXT: addiw a0, a0, 1475
+; RV64I-NEXT: addi a0, a0, 1475
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, -1711
; RV64I-NEXT: ret
@@ -2263,14 +2263,14 @@ define i64 @imm_neg_16116864687() {
; RV64IZBA-LABEL: imm_neg_16116864687:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 611378
-; RV64IZBA-NEXT: addiw a0, a0, 265
+; RV64IZBA-NEXT: addi a0, a0, 265
; RV64IZBA-NEXT: sh3add a0, a0, a0
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_neg_16116864687:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1047615
-; RV64IZBB-NEXT: addiw a0, a0, 1475
+; RV64IZBB-NEXT: addi a0, a0, 1475
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, -1711
; RV64IZBB-NEXT: ret
@@ -2278,7 +2278,7 @@ define i64 @imm_neg_16116864687() {
; RV64IZBS-LABEL: imm_neg_16116864687:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 1047615
-; RV64IZBS-NEXT: addiw a0, a0, 1475
+; RV64IZBS-NEXT: addi a0, a0, 1475
; RV64IZBS-NEXT: slli a0, a0, 12
; RV64IZBS-NEXT: addi a0, a0, -1711
; RV64IZBS-NEXT: ret
@@ -2286,7 +2286,7 @@ define i64 @imm_neg_16116864687() {
; RV64IXTHEADBB-LABEL: imm_neg_16116864687:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1047615
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1475
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1475
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, -1711
; RV64IXTHEADBB-NEXT: ret
@@ -2301,7 +2301,7 @@ define i64 @imm_neg_16116864687() {
; RV64-REMAT-LABEL: imm_neg_16116864687:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1047615
-; RV64-REMAT-NEXT: addiw a0, a0, 1475
+; RV64-REMAT-NEXT: addi a0, a0, 1475
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, -1711
; RV64-REMAT-NEXT: ret
@@ -2390,7 +2390,7 @@ define i64 @imm_70370820078523() {
; RV64-NOPOOL-LABEL: imm_70370820078523:
; RV64-NOPOOL: # %bb.0:
; RV64-NOPOOL-NEXT: lui a0, 256
-; RV64-NOPOOL-NEXT: addiw a0, a0, 31
+; RV64-NOPOOL-NEXT: addi a0, a0, 31
; RV64-NOPOOL-NEXT: slli a0, a0, 12
; RV64-NOPOOL-NEXT: addi a0, a0, -273
; RV64-NOPOOL-NEXT: slli a0, a0, 14
@@ -2406,7 +2406,7 @@ define i64 @imm_70370820078523() {
; RV64IZBA-LABEL: imm_70370820078523:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 256
-; RV64IZBA-NEXT: addiw a0, a0, 31
+; RV64IZBA-NEXT: addi a0, a0, 31
; RV64IZBA-NEXT: slli a0, a0, 12
; RV64IZBA-NEXT: addi a0, a0, -273
; RV64IZBA-NEXT: slli a0, a0, 14
@@ -2416,7 +2416,7 @@ define i64 @imm_70370820078523() {
; RV64IZBB-LABEL: imm_70370820078523:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 256
-; RV64IZBB-NEXT: addiw a0, a0, 31
+; RV64IZBB-NEXT: addi a0, a0, 31
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, -273
; RV64IZBB-NEXT: slli a0, a0, 14
@@ -2426,14 +2426,14 @@ define i64 @imm_70370820078523() {
; RV64IZBS-LABEL: imm_70370820078523:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 506812
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bseti a0, a0, 46
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_70370820078523:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 256
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 31
+; RV64IXTHEADBB-NEXT: addi a0, a0, 31
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, -273
; RV64IXTHEADBB-NEXT: slli a0, a0, 14
@@ -2450,7 +2450,7 @@ define i64 @imm_70370820078523() {
; RV64-REMAT-LABEL: imm_70370820078523:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 256
-; RV64-REMAT-NEXT: addiw a0, a0, 31
+; RV64-REMAT-NEXT: addi a0, a0, 31
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, -273
; RV64-REMAT-NEXT: slli a0, a0, 14
@@ -2476,7 +2476,7 @@ define i64 @imm_neg_9223372034778874949() {
; RV64I-LABEL: imm_neg_9223372034778874949:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 506812
-; RV64I-NEXT: addiw a0, a0, -1093
+; RV64I-NEXT: addi a0, a0, -1093
; RV64I-NEXT: slli a1, a0, 63
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
@@ -2484,7 +2484,7 @@ define i64 @imm_neg_9223372034778874949() {
; RV64IZBA-LABEL: imm_neg_9223372034778874949:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 506812
-; RV64IZBA-NEXT: addiw a0, a0, -1093
+; RV64IZBA-NEXT: addi a0, a0, -1093
; RV64IZBA-NEXT: slli a1, a0, 63
; RV64IZBA-NEXT: add a0, a0, a1
; RV64IZBA-NEXT: ret
@@ -2492,7 +2492,7 @@ define i64 @imm_neg_9223372034778874949() {
; RV64IZBB-LABEL: imm_neg_9223372034778874949:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 506812
-; RV64IZBB-NEXT: addiw a0, a0, -1093
+; RV64IZBB-NEXT: addi a0, a0, -1093
; RV64IZBB-NEXT: slli a1, a0, 63
; RV64IZBB-NEXT: add a0, a0, a1
; RV64IZBB-NEXT: ret
@@ -2500,14 +2500,14 @@ define i64 @imm_neg_9223372034778874949() {
; RV64IZBS-LABEL: imm_neg_9223372034778874949:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 506812
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bseti a0, a0, 63
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_9223372034778874949:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 506812
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1093
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1093
; RV64IXTHEADBB-NEXT: slli a1, a0, 63
; RV64IXTHEADBB-NEXT: add a0, a0, a1
; RV64IXTHEADBB-NEXT: ret
@@ -2522,7 +2522,7 @@ define i64 @imm_neg_9223372034778874949() {
; RV64-REMAT-LABEL: imm_neg_9223372034778874949:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 506812
-; RV64-REMAT-NEXT: addiw a0, a0, -1093
+; RV64-REMAT-NEXT: addi a0, a0, -1093
; RV64-REMAT-NEXT: slli a1, a0, 63
; RV64-REMAT-NEXT: add a0, a0, a1
; RV64-REMAT-NEXT: ret
@@ -2585,7 +2585,7 @@ define i64 @imm_neg_9223301666034697285() {
; RV64IZBS-LABEL: imm_neg_9223301666034697285:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 506812
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bseti a0, a0, 46
; RV64IZBS-NEXT: bseti a0, a0, 63
; RV64IZBS-NEXT: ret
@@ -2704,7 +2704,7 @@ define i64 @imm_neg_8798043653189() {
; RV64I-LABEL: imm_neg_8798043653189:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 917475
-; RV64I-NEXT: addiw a0, a0, -273
+; RV64I-NEXT: addi a0, a0, -273
; RV64I-NEXT: slli a0, a0, 14
; RV64I-NEXT: addi a0, a0, -1093
; RV64I-NEXT: ret
@@ -2712,7 +2712,7 @@ define i64 @imm_neg_8798043653189() {
; RV64IZBA-LABEL: imm_neg_8798043653189:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 917475
-; RV64IZBA-NEXT: addiw a0, a0, -273
+; RV64IZBA-NEXT: addi a0, a0, -273
; RV64IZBA-NEXT: slli a0, a0, 14
; RV64IZBA-NEXT: addi a0, a0, -1093
; RV64IZBA-NEXT: ret
@@ -2720,7 +2720,7 @@ define i64 @imm_neg_8798043653189() {
; RV64IZBB-LABEL: imm_neg_8798043653189:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 917475
-; RV64IZBB-NEXT: addiw a0, a0, -273
+; RV64IZBB-NEXT: addi a0, a0, -273
; RV64IZBB-NEXT: slli a0, a0, 14
; RV64IZBB-NEXT: addi a0, a0, -1093
; RV64IZBB-NEXT: ret
@@ -2728,14 +2728,14 @@ define i64 @imm_neg_8798043653189() {
; RV64IZBS-LABEL: imm_neg_8798043653189:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 572348
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bclri a0, a0, 43
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_8798043653189:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 917475
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -273
+; RV64IXTHEADBB-NEXT: addi a0, a0, -273
; RV64IXTHEADBB-NEXT: slli a0, a0, 14
; RV64IXTHEADBB-NEXT: addi a0, a0, -1093
; RV64IXTHEADBB-NEXT: ret
@@ -2751,7 +2751,7 @@ define i64 @imm_neg_8798043653189() {
; RV64-REMAT-LABEL: imm_neg_8798043653189:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 917475
-; RV64-REMAT-NEXT: addiw a0, a0, -273
+; RV64-REMAT-NEXT: addi a0, a0, -273
; RV64-REMAT-NEXT: slli a0, a0, 14
; RV64-REMAT-NEXT: addi a0, a0, -1093
; RV64-REMAT-NEXT: ret
@@ -2776,7 +2776,7 @@ define i64 @imm_9223372034904144827() {
; RV64I-LABEL: imm_9223372034904144827:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 572348
-; RV64I-NEXT: addiw a0, a0, -1093
+; RV64I-NEXT: addi a0, a0, -1093
; RV64I-NEXT: slli a1, a0, 63
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
@@ -2784,7 +2784,7 @@ define i64 @imm_9223372034904144827() {
; RV64IZBA-LABEL: imm_9223372034904144827:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 572348
-; RV64IZBA-NEXT: addiw a0, a0, -1093
+; RV64IZBA-NEXT: addi a0, a0, -1093
; RV64IZBA-NEXT: slli a1, a0, 63
; RV64IZBA-NEXT: add a0, a0, a1
; RV64IZBA-NEXT: ret
@@ -2792,7 +2792,7 @@ define i64 @imm_9223372034904144827() {
; RV64IZBB-LABEL: imm_9223372034904144827:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 572348
-; RV64IZBB-NEXT: addiw a0, a0, -1093
+; RV64IZBB-NEXT: addi a0, a0, -1093
; RV64IZBB-NEXT: slli a1, a0, 63
; RV64IZBB-NEXT: add a0, a0, a1
; RV64IZBB-NEXT: ret
@@ -2800,14 +2800,14 @@ define i64 @imm_9223372034904144827() {
; RV64IZBS-LABEL: imm_9223372034904144827:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 572348
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bclri a0, a0, 63
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_9223372034904144827:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 572348
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1093
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1093
; RV64IXTHEADBB-NEXT: slli a1, a0, 63
; RV64IXTHEADBB-NEXT: add a0, a0, a1
; RV64IXTHEADBB-NEXT: ret
@@ -2823,7 +2823,7 @@ define i64 @imm_9223372034904144827() {
; RV64-REMAT-LABEL: imm_9223372034904144827:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 572348
-; RV64-REMAT-NEXT: addiw a0, a0, -1093
+; RV64-REMAT-NEXT: addi a0, a0, -1093
; RV64-REMAT-NEXT: slli a1, a0, 63
; RV64-REMAT-NEXT: add a0, a0, a1
; RV64-REMAT-NEXT: ret
@@ -2887,7 +2887,7 @@ define i64 @imm_neg_9223354442718100411() {
; RV64IZBS-LABEL: imm_neg_9223354442718100411:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 572348
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: bclri a0, a0, 44
; RV64IZBS-NEXT: bclri a0, a0, 63
; RV64IZBS-NEXT: ret
@@ -2941,35 +2941,35 @@ define i64 @imm_2863311530() {
; RV64I-LABEL: imm_2863311530:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 349525
-; RV64I-NEXT: addiw a0, a0, 1365
+; RV64I-NEXT: addi a0, a0, 1365
; RV64I-NEXT: slli a0, a0, 1
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: imm_2863311530:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 349525
-; RV64IZBA-NEXT: addiw a0, a0, 1365
+; RV64IZBA-NEXT: addi a0, a0, 1365
; RV64IZBA-NEXT: slli a0, a0, 1
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_2863311530:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 349525
-; RV64IZBB-NEXT: addiw a0, a0, 1365
+; RV64IZBB-NEXT: addi a0, a0, 1365
; RV64IZBB-NEXT: slli a0, a0, 1
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: imm_2863311530:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 349525
-; RV64IZBS-NEXT: addiw a0, a0, 1365
+; RV64IZBS-NEXT: addi a0, a0, 1365
; RV64IZBS-NEXT: slli a0, a0, 1
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_2863311530:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 349525
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1365
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1365
; RV64IXTHEADBB-NEXT: slli a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
;
@@ -2983,7 +2983,7 @@ define i64 @imm_2863311530() {
; RV64-REMAT-LABEL: imm_2863311530:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 349525
-; RV64-REMAT-NEXT: addiw a0, a0, 1365
+; RV64-REMAT-NEXT: addi a0, a0, 1365
; RV64-REMAT-NEXT: slli a0, a0, 1
; RV64-REMAT-NEXT: ret
ret i64 2863311530 ; #0xaaaaaaaa
@@ -3006,35 +3006,35 @@ define i64 @imm_neg_2863311530() {
; RV64I-LABEL: imm_neg_2863311530:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 699051
-; RV64I-NEXT: addiw a0, a0, -1365
+; RV64I-NEXT: addi a0, a0, -1365
; RV64I-NEXT: slli a0, a0, 1
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: imm_neg_2863311530:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 699051
-; RV64IZBA-NEXT: addiw a0, a0, -1365
+; RV64IZBA-NEXT: addi a0, a0, -1365
; RV64IZBA-NEXT: slli a0, a0, 1
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm_neg_2863311530:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 699051
-; RV64IZBB-NEXT: addiw a0, a0, -1365
+; RV64IZBB-NEXT: addi a0, a0, -1365
; RV64IZBB-NEXT: slli a0, a0, 1
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: imm_neg_2863311530:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 699051
-; RV64IZBS-NEXT: addiw a0, a0, -1365
+; RV64IZBS-NEXT: addi a0, a0, -1365
; RV64IZBS-NEXT: slli a0, a0, 1
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_2863311530:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 699051
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1365
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1365
; RV64IXTHEADBB-NEXT: slli a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
;
@@ -3048,7 +3048,7 @@ define i64 @imm_neg_2863311530() {
; RV64-REMAT-LABEL: imm_neg_2863311530:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 699051
-; RV64-REMAT-NEXT: addiw a0, a0, -1365
+; RV64-REMAT-NEXT: addi a0, a0, -1365
; RV64-REMAT-NEXT: slli a0, a0, 1
; RV64-REMAT-NEXT: ret
ret i64 -2863311530 ; #0xffffffff55555556
@@ -3195,7 +3195,7 @@ define i64 @imm_12900924131259() {
; RV64I-LABEL: imm_12900924131259:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 188
-; RV64I-NEXT: addiw a0, a0, -1093
+; RV64I-NEXT: addi a0, a0, -1093
; RV64I-NEXT: slli a0, a0, 24
; RV64I-NEXT: addi a0, a0, 1979
; RV64I-NEXT: ret
@@ -3210,7 +3210,7 @@ define i64 @imm_12900924131259() {
; RV64IZBB-LABEL: imm_12900924131259:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 188
-; RV64IZBB-NEXT: addiw a0, a0, -1093
+; RV64IZBB-NEXT: addi a0, a0, -1093
; RV64IZBB-NEXT: slli a0, a0, 24
; RV64IZBB-NEXT: addi a0, a0, 1979
; RV64IZBB-NEXT: ret
@@ -3218,7 +3218,7 @@ define i64 @imm_12900924131259() {
; RV64IZBS-LABEL: imm_12900924131259:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 188
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: slli a0, a0, 24
; RV64IZBS-NEXT: addi a0, a0, 1979
; RV64IZBS-NEXT: ret
@@ -3226,7 +3226,7 @@ define i64 @imm_12900924131259() {
; RV64IXTHEADBB-LABEL: imm_12900924131259:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 188
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1093
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1093
; RV64IXTHEADBB-NEXT: slli a0, a0, 24
; RV64IXTHEADBB-NEXT: addi a0, a0, 1979
; RV64IXTHEADBB-NEXT: ret
@@ -3242,7 +3242,7 @@ define i64 @imm_12900924131259() {
; RV64-REMAT-LABEL: imm_12900924131259:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 188
-; RV64-REMAT-NEXT: addiw a0, a0, -1093
+; RV64-REMAT-NEXT: addi a0, a0, -1093
; RV64-REMAT-NEXT: slli a0, a0, 24
; RV64-REMAT-NEXT: addi a0, a0, 1979
; RV64-REMAT-NEXT: ret
@@ -3265,7 +3265,7 @@ define i64 @imm_50394234880() {
; RV64I-LABEL: imm_50394234880:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 188
-; RV64I-NEXT: addiw a0, a0, -1093
+; RV64I-NEXT: addi a0, a0, -1093
; RV64I-NEXT: slli a0, a0, 16
; RV64I-NEXT: ret
;
@@ -3278,21 +3278,21 @@ define i64 @imm_50394234880() {
; RV64IZBB-LABEL: imm_50394234880:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 188
-; RV64IZBB-NEXT: addiw a0, a0, -1093
+; RV64IZBB-NEXT: addi a0, a0, -1093
; RV64IZBB-NEXT: slli a0, a0, 16
; RV64IZBB-NEXT: ret
;
; RV64IZBS-LABEL: imm_50394234880:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 188
-; RV64IZBS-NEXT: addiw a0, a0, -1093
+; RV64IZBS-NEXT: addi a0, a0, -1093
; RV64IZBS-NEXT: slli a0, a0, 16
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_50394234880:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 188
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1093
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1093
; RV64IXTHEADBB-NEXT: slli a0, a0, 16
; RV64IXTHEADBB-NEXT: ret
;
@@ -3305,7 +3305,7 @@ define i64 @imm_50394234880() {
; RV64-REMAT-LABEL: imm_50394234880:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 188
-; RV64-REMAT-NEXT: addiw a0, a0, -1093
+; RV64-REMAT-NEXT: addi a0, a0, -1093
; RV64-REMAT-NEXT: slli a0, a0, 16
; RV64-REMAT-NEXT: ret
ret i64 50394234880
@@ -3407,7 +3407,7 @@ define i64 @imm_12900918536874() {
; RV64I-LABEL: imm_12900918536874:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 384477
-; RV64I-NEXT: addiw a0, a0, 1365
+; RV64I-NEXT: addi a0, a0, 1365
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1365
; RV64I-NEXT: slli a0, a0, 1
@@ -3424,7 +3424,7 @@ define i64 @imm_12900918536874() {
; RV64IZBB-LABEL: imm_12900918536874:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 384477
-; RV64IZBB-NEXT: addiw a0, a0, 1365
+; RV64IZBB-NEXT: addi a0, a0, 1365
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1365
; RV64IZBB-NEXT: slli a0, a0, 1
@@ -3433,7 +3433,7 @@ define i64 @imm_12900918536874() {
; RV64IZBS-LABEL: imm_12900918536874:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 384477
-; RV64IZBS-NEXT: addiw a0, a0, 1365
+; RV64IZBS-NEXT: addi a0, a0, 1365
; RV64IZBS-NEXT: slli a0, a0, 12
; RV64IZBS-NEXT: addi a0, a0, 1365
; RV64IZBS-NEXT: slli a0, a0, 1
@@ -3442,7 +3442,7 @@ define i64 @imm_12900918536874() {
; RV64IXTHEADBB-LABEL: imm_12900918536874:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 384477
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1365
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1365
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1365
; RV64IXTHEADBB-NEXT: slli a0, a0, 1
@@ -3459,7 +3459,7 @@ define i64 @imm_12900918536874() {
; RV64-REMAT-LABEL: imm_12900918536874:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 384477
-; RV64-REMAT-NEXT: addiw a0, a0, 1365
+; RV64-REMAT-NEXT: addi a0, a0, 1365
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1365
; RV64-REMAT-NEXT: slli a0, a0, 1
@@ -3485,7 +3485,7 @@ define i64 @imm_12900925247761() {
; RV64I-LABEL: imm_12900925247761:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 384478
-; RV64I-NEXT: addiw a0, a0, -1911
+; RV64I-NEXT: addi a0, a0, -1911
; RV64I-NEXT: slli a0, a0, 13
; RV64I-NEXT: addi a0, a0, -2048
; RV64I-NEXT: addi a0, a0, -1775
@@ -3502,7 +3502,7 @@ define i64 @imm_12900925247761() {
; RV64IZBB-LABEL: imm_12900925247761:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 384478
-; RV64IZBB-NEXT: addiw a0, a0, -1911
+; RV64IZBB-NEXT: addi a0, a0, -1911
; RV64IZBB-NEXT: slli a0, a0, 13
; RV64IZBB-NEXT: addi a0, a0, -2048
; RV64IZBB-NEXT: addi a0, a0, -1775
@@ -3511,7 +3511,7 @@ define i64 @imm_12900925247761() {
; RV64IZBS-LABEL: imm_12900925247761:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 384478
-; RV64IZBS-NEXT: addiw a0, a0, -1911
+; RV64IZBS-NEXT: addi a0, a0, -1911
; RV64IZBS-NEXT: slli a0, a0, 13
; RV64IZBS-NEXT: addi a0, a0, -2048
; RV64IZBS-NEXT: addi a0, a0, -1775
@@ -3520,7 +3520,7 @@ define i64 @imm_12900925247761() {
; RV64IXTHEADBB-LABEL: imm_12900925247761:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 384478
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1911
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1911
; RV64IXTHEADBB-NEXT: slli a0, a0, 13
; RV64IXTHEADBB-NEXT: addi a0, a0, -2048
; RV64IXTHEADBB-NEXT: addi a0, a0, -1775
@@ -3537,7 +3537,7 @@ define i64 @imm_12900925247761() {
; RV64-REMAT-LABEL: imm_12900925247761:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 384478
-; RV64-REMAT-NEXT: addiw a0, a0, -1911
+; RV64-REMAT-NEXT: addi a0, a0, -1911
; RV64-REMAT-NEXT: slli a0, a0, 13
; RV64-REMAT-NEXT: addi a0, a0, -2048
; RV64-REMAT-NEXT: addi a0, a0, -1775
@@ -3562,7 +3562,7 @@ define i64 @imm_7158272001() {
; RV64I-LABEL: imm_7158272001:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 427
-; RV64I-NEXT: addiw a0, a0, -1367
+; RV64I-NEXT: addi a0, a0, -1367
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: ret
@@ -3577,7 +3577,7 @@ define i64 @imm_7158272001() {
; RV64IZBB-LABEL: imm_7158272001:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 427
-; RV64IZBB-NEXT: addiw a0, a0, -1367
+; RV64IZBB-NEXT: addi a0, a0, -1367
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: ret
@@ -3585,7 +3585,7 @@ define i64 @imm_7158272001() {
; RV64IZBS-LABEL: imm_7158272001:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 427
-; RV64IZBS-NEXT: addiw a0, a0, -1367
+; RV64IZBS-NEXT: addi a0, a0, -1367
; RV64IZBS-NEXT: slli a0, a0, 12
; RV64IZBS-NEXT: addi a0, a0, 1
; RV64IZBS-NEXT: ret
@@ -3593,7 +3593,7 @@ define i64 @imm_7158272001() {
; RV64IXTHEADBB-LABEL: imm_7158272001:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 427
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1367
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1367
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
@@ -3608,7 +3608,7 @@ define i64 @imm_7158272001() {
; RV64-REMAT-LABEL: imm_7158272001:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 427
-; RV64-REMAT-NEXT: addiw a0, a0, -1367
+; RV64-REMAT-NEXT: addi a0, a0, -1367
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: ret
@@ -3632,7 +3632,7 @@ define i64 @imm_12884889601() {
; RV64I-LABEL: imm_12884889601:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 768
-; RV64I-NEXT: addiw a0, a0, -3
+; RV64I-NEXT: addi a0, a0, -3
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: ret
@@ -3647,7 +3647,7 @@ define i64 @imm_12884889601() {
; RV64IZBB-LABEL: imm_12884889601:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 768
-; RV64IZBB-NEXT: addiw a0, a0, -3
+; RV64IZBB-NEXT: addi a0, a0, -3
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: ret
@@ -3655,7 +3655,7 @@ define i64 @imm_12884889601() {
; RV64IZBS-LABEL: imm_12884889601:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 768
-; RV64IZBS-NEXT: addiw a0, a0, -3
+; RV64IZBS-NEXT: addi a0, a0, -3
; RV64IZBS-NEXT: slli a0, a0, 12
; RV64IZBS-NEXT: addi a0, a0, 1
; RV64IZBS-NEXT: ret
@@ -3663,7 +3663,7 @@ define i64 @imm_12884889601() {
; RV64IXTHEADBB-LABEL: imm_12884889601:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 768
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -3
+; RV64IXTHEADBB-NEXT: addi a0, a0, -3
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
@@ -3678,7 +3678,7 @@ define i64 @imm_12884889601() {
; RV64-REMAT-LABEL: imm_12884889601:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 768
-; RV64-REMAT-NEXT: addiw a0, a0, -3
+; RV64-REMAT-NEXT: addi a0, a0, -3
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: ret
@@ -3702,7 +3702,7 @@ define i64 @imm_neg_3435982847() {
; RV64I-LABEL: imm_neg_3435982847:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1048371
-; RV64I-NEXT: addiw a0, a0, 817
+; RV64I-NEXT: addi a0, a0, 817
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: ret
@@ -3717,7 +3717,7 @@ define i64 @imm_neg_3435982847() {
; RV64IZBB-LABEL: imm_neg_3435982847:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1048371
-; RV64IZBB-NEXT: addiw a0, a0, 817
+; RV64IZBB-NEXT: addi a0, a0, 817
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: ret
@@ -3725,14 +3725,14 @@ define i64 @imm_neg_3435982847() {
; RV64IZBS-LABEL: imm_neg_3435982847:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 734001
-; RV64IZBS-NEXT: addiw a0, a0, 1
+; RV64IZBS-NEXT: addi a0, a0, 1
; RV64IZBS-NEXT: bclri a0, a0, 31
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_3435982847:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1048371
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 817
+; RV64IXTHEADBB-NEXT: addi a0, a0, 817
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
@@ -3747,7 +3747,7 @@ define i64 @imm_neg_3435982847() {
; RV64-REMAT-LABEL: imm_neg_3435982847:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1048371
-; RV64-REMAT-NEXT: addiw a0, a0, 817
+; RV64-REMAT-NEXT: addi a0, a0, 817
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: ret
@@ -3771,7 +3771,7 @@ define i64 @imm_neg_5726842879() {
; RV64I-LABEL: imm_neg_5726842879:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1048235
-; RV64I-NEXT: addiw a0, a0, -1419
+; RV64I-NEXT: addi a0, a0, -1419
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: ret
@@ -3786,7 +3786,7 @@ define i64 @imm_neg_5726842879() {
; RV64IZBB-LABEL: imm_neg_5726842879:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1048235
-; RV64IZBB-NEXT: addiw a0, a0, -1419
+; RV64IZBB-NEXT: addi a0, a0, -1419
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: ret
@@ -3794,14 +3794,14 @@ define i64 @imm_neg_5726842879() {
; RV64IZBS-LABEL: imm_neg_5726842879:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 698997
-; RV64IZBS-NEXT: addiw a0, a0, 1
+; RV64IZBS-NEXT: addi a0, a0, 1
; RV64IZBS-NEXT: bclri a0, a0, 32
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_5726842879:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1048235
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1419
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1419
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
@@ -3816,7 +3816,7 @@ define i64 @imm_neg_5726842879() {
; RV64-REMAT-LABEL: imm_neg_5726842879:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1048235
-; RV64-REMAT-NEXT: addiw a0, a0, -1419
+; RV64-REMAT-NEXT: addi a0, a0, -1419
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: ret
@@ -3840,7 +3840,7 @@ define i64 @imm_neg_10307948543() {
; RV64I-LABEL: imm_neg_10307948543:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1047962
-; RV64I-NEXT: addiw a0, a0, -1645
+; RV64I-NEXT: addi a0, a0, -1645
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: ret
@@ -3855,7 +3855,7 @@ define i64 @imm_neg_10307948543() {
; RV64IZBB-LABEL: imm_neg_10307948543:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1047962
-; RV64IZBB-NEXT: addiw a0, a0, -1645
+; RV64IZBB-NEXT: addi a0, a0, -1645
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: ret
@@ -3863,14 +3863,14 @@ define i64 @imm_neg_10307948543() {
; RV64IZBS-LABEL: imm_neg_10307948543:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 629139
-; RV64IZBS-NEXT: addiw a0, a0, 1
+; RV64IZBS-NEXT: addi a0, a0, 1
; RV64IZBS-NEXT: bclri a0, a0, 33
; RV64IZBS-NEXT: ret
;
; RV64IXTHEADBB-LABEL: imm_neg_10307948543:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1047962
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1645
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1645
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: ret
@@ -3885,7 +3885,7 @@ define i64 @imm_neg_10307948543() {
; RV64-REMAT-LABEL: imm_neg_10307948543:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1047962
-; RV64-REMAT-NEXT: addiw a0, a0, -1645
+; RV64-REMAT-NEXT: addi a0, a0, -1645
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: ret
@@ -4098,7 +4098,7 @@ define i64 @PR54812() {
; RV64I-LABEL: PR54812:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1048447
-; RV64I-NEXT: addiw a0, a0, 1407
+; RV64I-NEXT: addi a0, a0, 1407
; RV64I-NEXT: slli a0, a0, 12
; RV64I-NEXT: ret
;
@@ -4111,7 +4111,7 @@ define i64 @PR54812() {
; RV64IZBB-LABEL: PR54812:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1048447
-; RV64IZBB-NEXT: addiw a0, a0, 1407
+; RV64IZBB-NEXT: addi a0, a0, 1407
; RV64IZBB-NEXT: slli a0, a0, 12
; RV64IZBB-NEXT: ret
;
@@ -4124,7 +4124,7 @@ define i64 @PR54812() {
; RV64IXTHEADBB-LABEL: PR54812:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1048447
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1407
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1407
; RV64IXTHEADBB-NEXT: slli a0, a0, 12
; RV64IXTHEADBB-NEXT: ret
;
@@ -4137,7 +4137,7 @@ define i64 @PR54812() {
; RV64-REMAT-LABEL: PR54812:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1048447
-; RV64-REMAT-NEXT: addiw a0, a0, 1407
+; RV64-REMAT-NEXT: addi a0, a0, 1407
; RV64-REMAT-NEXT: slli a0, a0, 12
; RV64-REMAT-NEXT: ret
ret i64 -2158497792;
@@ -4215,7 +4215,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64I-LABEL: imm64_same_lo_hi:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 65793
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: slli a1, a0, 32
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
@@ -4223,7 +4223,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64IZBA-LABEL: imm64_same_lo_hi:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 65793
-; RV64IZBA-NEXT: addiw a0, a0, 16
+; RV64IZBA-NEXT: addi a0, a0, 16
; RV64IZBA-NEXT: slli a1, a0, 32
; RV64IZBA-NEXT: add a0, a0, a1
; RV64IZBA-NEXT: ret
@@ -4231,7 +4231,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64IZBB-LABEL: imm64_same_lo_hi:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 65793
-; RV64IZBB-NEXT: addiw a0, a0, 16
+; RV64IZBB-NEXT: addi a0, a0, 16
; RV64IZBB-NEXT: slli a1, a0, 32
; RV64IZBB-NEXT: add a0, a0, a1
; RV64IZBB-NEXT: ret
@@ -4239,7 +4239,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64IZBS-LABEL: imm64_same_lo_hi:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 65793
-; RV64IZBS-NEXT: addiw a0, a0, 16
+; RV64IZBS-NEXT: addi a0, a0, 16
; RV64IZBS-NEXT: slli a1, a0, 32
; RV64IZBS-NEXT: add a0, a0, a1
; RV64IZBS-NEXT: ret
@@ -4247,7 +4247,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64IXTHEADBB-LABEL: imm64_same_lo_hi:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 65793
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 16
+; RV64IXTHEADBB-NEXT: addi a0, a0, 16
; RV64IXTHEADBB-NEXT: slli a1, a0, 32
; RV64IXTHEADBB-NEXT: add a0, a0, a1
; RV64IXTHEADBB-NEXT: ret
@@ -4262,7 +4262,7 @@ define i64 @imm64_same_lo_hi() nounwind {
; RV64-REMAT-LABEL: imm64_same_lo_hi:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 65793
-; RV64-REMAT-NEXT: addiw a0, a0, 16
+; RV64-REMAT-NEXT: addi a0, a0, 16
; RV64-REMAT-NEXT: slli a1, a0, 32
; RV64-REMAT-NEXT: add a0, a0, a1
; RV64-REMAT-NEXT: ret
@@ -4287,7 +4287,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64-NOPOOL-LABEL: imm64_same_lo_hi_optsize:
; RV64-NOPOOL: # %bb.0:
; RV64-NOPOOL-NEXT: lui a0, 65793
-; RV64-NOPOOL-NEXT: addiw a0, a0, 16
+; RV64-NOPOOL-NEXT: addi a0, a0, 16
; RV64-NOPOOL-NEXT: slli a1, a0, 32
; RV64-NOPOOL-NEXT: add a0, a0, a1
; RV64-NOPOOL-NEXT: ret
@@ -4301,7 +4301,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64IZBA-LABEL: imm64_same_lo_hi_optsize:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 65793
-; RV64IZBA-NEXT: addiw a0, a0, 16
+; RV64IZBA-NEXT: addi a0, a0, 16
; RV64IZBA-NEXT: slli a1, a0, 32
; RV64IZBA-NEXT: add a0, a0, a1
; RV64IZBA-NEXT: ret
@@ -4309,7 +4309,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64IZBB-LABEL: imm64_same_lo_hi_optsize:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 65793
-; RV64IZBB-NEXT: addiw a0, a0, 16
+; RV64IZBB-NEXT: addi a0, a0, 16
; RV64IZBB-NEXT: slli a1, a0, 32
; RV64IZBB-NEXT: add a0, a0, a1
; RV64IZBB-NEXT: ret
@@ -4317,7 +4317,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64IZBS-LABEL: imm64_same_lo_hi_optsize:
; RV64IZBS: # %bb.0:
; RV64IZBS-NEXT: lui a0, 65793
-; RV64IZBS-NEXT: addiw a0, a0, 16
+; RV64IZBS-NEXT: addi a0, a0, 16
; RV64IZBS-NEXT: slli a1, a0, 32
; RV64IZBS-NEXT: add a0, a0, a1
; RV64IZBS-NEXT: ret
@@ -4325,7 +4325,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64IXTHEADBB-LABEL: imm64_same_lo_hi_optsize:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 65793
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 16
+; RV64IXTHEADBB-NEXT: addi a0, a0, 16
; RV64IXTHEADBB-NEXT: slli a1, a0, 32
; RV64IXTHEADBB-NEXT: add a0, a0, a1
; RV64IXTHEADBB-NEXT: ret
@@ -4340,7 +4340,7 @@ define i64 @imm64_same_lo_hi_optsize() nounwind optsize {
; RV64-REMAT-LABEL: imm64_same_lo_hi_optsize:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 65793
-; RV64-REMAT-NEXT: addiw a0, a0, 16
+; RV64-REMAT-NEXT: addi a0, a0, 16
; RV64-REMAT-NEXT: slli a1, a0, 32
; RV64-REMAT-NEXT: add a0, a0, a1
; RV64-REMAT-NEXT: ret
@@ -4456,21 +4456,21 @@ define i64 @imm64_0x8000080000000() {
; RV64I-LABEL: imm64_0x8000080000000:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 256
-; RV64I-NEXT: addiw a0, a0, 1
+; RV64I-NEXT: addi a0, a0, 1
; RV64I-NEXT: slli a0, a0, 31
; RV64I-NEXT: ret
;
; RV64IZBA-LABEL: imm64_0x8000080000000:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 256
-; RV64IZBA-NEXT: addiw a0, a0, 1
+; RV64IZBA-NEXT: addi a0, a0, 1
; RV64IZBA-NEXT: slli a0, a0, 31
; RV64IZBA-NEXT: ret
;
; RV64IZBB-LABEL: imm64_0x8000080000000:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 256
-; RV64IZBB-NEXT: addiw a0, a0, 1
+; RV64IZBB-NEXT: addi a0, a0, 1
; RV64IZBB-NEXT: slli a0, a0, 31
; RV64IZBB-NEXT: ret
;
@@ -4483,7 +4483,7 @@ define i64 @imm64_0x8000080000000() {
; RV64IXTHEADBB-LABEL: imm64_0x8000080000000:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 256
-; RV64IXTHEADBB-NEXT: addiw a0, a0, 1
+; RV64IXTHEADBB-NEXT: addi a0, a0, 1
; RV64IXTHEADBB-NEXT: slli a0, a0, 31
; RV64IXTHEADBB-NEXT: ret
;
@@ -4496,7 +4496,7 @@ define i64 @imm64_0x8000080000000() {
; RV64-REMAT-LABEL: imm64_0x8000080000000:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 256
-; RV64-REMAT-NEXT: addiw a0, a0, 1
+; RV64-REMAT-NEXT: addi a0, a0, 1
; RV64-REMAT-NEXT: slli a0, a0, 31
; RV64-REMAT-NEXT: ret
ret i64 2251801961168896 ; 0x8000080000000
@@ -4584,7 +4584,7 @@ define i64 @imm64_0xFF7FFFFF7FFFFFFE() {
; RV64I-LABEL: imm64_0xFF7FFFFF7FFFFFFE:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1044480
-; RV64I-NEXT: addiw a0, a0, -1
+; RV64I-NEXT: addi a0, a0, -1
; RV64I-NEXT: slli a0, a0, 31
; RV64I-NEXT: addi a0, a0, -1
; RV64I-NEXT: ret
@@ -4592,7 +4592,7 @@ define i64 @imm64_0xFF7FFFFF7FFFFFFE() {
; RV64IZBA-LABEL: imm64_0xFF7FFFFF7FFFFFFE:
; RV64IZBA: # %bb.0:
; RV64IZBA-NEXT: lui a0, 1044480
-; RV64IZBA-NEXT: addiw a0, a0, -1
+; RV64IZBA-NEXT: addi a0, a0, -1
; RV64IZBA-NEXT: slli a0, a0, 31
; RV64IZBA-NEXT: addi a0, a0, -1
; RV64IZBA-NEXT: ret
@@ -4600,7 +4600,7 @@ define i64 @imm64_0xFF7FFFFF7FFFFFFE() {
; RV64IZBB-LABEL: imm64_0xFF7FFFFF7FFFFFFE:
; RV64IZBB: # %bb.0:
; RV64IZBB-NEXT: lui a0, 1044480
-; RV64IZBB-NEXT: addiw a0, a0, -1
+; RV64IZBB-NEXT: addi a0, a0, -1
; RV64IZBB-NEXT: slli a0, a0, 31
; RV64IZBB-NEXT: addi a0, a0, -1
; RV64IZBB-NEXT: ret
@@ -4615,7 +4615,7 @@ define i64 @imm64_0xFF7FFFFF7FFFFFFE() {
; RV64IXTHEADBB-LABEL: imm64_0xFF7FFFFF7FFFFFFE:
; RV64IXTHEADBB: # %bb.0:
; RV64IXTHEADBB-NEXT: lui a0, 1044480
-; RV64IXTHEADBB-NEXT: addiw a0, a0, -1
+; RV64IXTHEADBB-NEXT: addi a0, a0, -1
; RV64IXTHEADBB-NEXT: slli a0, a0, 31
; RV64IXTHEADBB-NEXT: addi a0, a0, -1
; RV64IXTHEADBB-NEXT: ret
@@ -4631,7 +4631,7 @@ define i64 @imm64_0xFF7FFFFF7FFFFFFE() {
; RV64-REMAT-LABEL: imm64_0xFF7FFFFF7FFFFFFE:
; RV64-REMAT: # %bb.0:
; RV64-REMAT-NEXT: lui a0, 1044480
-; RV64-REMAT-NEXT: addiw a0, a0, -1
+; RV64-REMAT-NEXT: addi a0, a0, -1
; RV64-REMAT-NEXT: slli a0, a0, 31
; RV64-REMAT-NEXT: addi a0, a0, -1
; RV64-REMAT-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/inline-asm-mem-constraint.ll b/llvm/test/CodeGen/RISCV/inline-asm-mem-constraint.ll
index f0837d55bc53f..1152eaffa5c04 100644
--- a/llvm/test/CodeGen/RISCV/inline-asm-mem-constraint.ll
+++ b/llvm/test/CodeGen/RISCV/inline-asm-mem-constraint.ll
@@ -273,7 +273,7 @@ define void @constraint_m_with_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI5_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi2)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
@@ -419,7 +419,7 @@ define void @constraint_m_with_extern_weak_global_3() nounwind {
; RV64I-MEDIUM-NEXT: auipc a0, %got_pcrel_hi(ewg)
; RV64I-MEDIUM-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi5)(a0)
; RV64I-MEDIUM-NEXT: lui a1, 2
-; RV64I-MEDIUM-NEXT: addiw a1, a1, -192
+; RV64I-MEDIUM-NEXT: addi a1, a1, -192
; RV64I-MEDIUM-NEXT: add a0, a0, a1
; RV64I-MEDIUM-NEXT: #APP
; RV64I-MEDIUM-NEXT: sw zero, 0(a0)
@@ -432,7 +432,7 @@ define void @constraint_m_with_extern_weak_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI8_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi5)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
@@ -1435,7 +1435,7 @@ define void @constraint_o_with_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI21_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi15)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
@@ -1581,7 +1581,7 @@ define void @constraint_o_with_extern_weak_global_3() nounwind {
; RV64I-MEDIUM-NEXT: auipc a0, %got_pcrel_hi(ewg)
; RV64I-MEDIUM-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi18)(a0)
; RV64I-MEDIUM-NEXT: lui a1, 2
-; RV64I-MEDIUM-NEXT: addiw a1, a1, -192
+; RV64I-MEDIUM-NEXT: addi a1, a1, -192
; RV64I-MEDIUM-NEXT: add a0, a0, a1
; RV64I-MEDIUM-NEXT: #APP
; RV64I-MEDIUM-NEXT: sw zero, 0(a0)
@@ -1594,7 +1594,7 @@ define void @constraint_o_with_extern_weak_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI24_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi18)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
@@ -2500,7 +2500,7 @@ define void @constraint_A_with_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI35_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi27)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
@@ -2655,7 +2655,7 @@ define void @constraint_A_with_extern_weak_global_3() nounwind {
; RV64I-MEDIUM-NEXT: auipc a0, %got_pcrel_hi(ewg)
; RV64I-MEDIUM-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi30)(a0)
; RV64I-MEDIUM-NEXT: lui a1, 2
-; RV64I-MEDIUM-NEXT: addiw a1, a1, -192
+; RV64I-MEDIUM-NEXT: addi a1, a1, -192
; RV64I-MEDIUM-NEXT: add a0, a0, a1
; RV64I-MEDIUM-NEXT: #APP
; RV64I-MEDIUM-NEXT: sw zero, 0(a0)
@@ -2668,7 +2668,7 @@ define void @constraint_A_with_extern_weak_global_3() nounwind {
; RV64I-LARGE-NEXT: auipc a0, %pcrel_hi(.LCPI38_0)
; RV64I-LARGE-NEXT: ld a0, %pcrel_lo(.Lpcrel_hi30)(a0)
; RV64I-LARGE-NEXT: lui a1, 2
-; RV64I-LARGE-NEXT: addiw a1, a1, -192
+; RV64I-LARGE-NEXT: addi a1, a1, -192
; RV64I-LARGE-NEXT: add a0, a0, a1
; RV64I-LARGE-NEXT: #APP
; RV64I-LARGE-NEXT: sw zero, 0(a0)
diff --git a/llvm/test/CodeGen/RISCV/lack-of-signed-truncation-check.ll b/llvm/test/CodeGen/RISCV/lack-of-signed-truncation-check.ll
index 4a338ce5bd1f7..64a257c1d578f 100644
--- a/llvm/test/CodeGen/RISCV/lack-of-signed-truncation-check.ll
+++ b/llvm/test/CodeGen/RISCV/lack-of-signed-truncation-check.ll
@@ -766,7 +766,7 @@ define i1 @add_ugecmp_bad_i16_i8_cmp(i16 %x, i16 %y) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: addi a0, a0, 128
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sltu a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll b/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
index 3c83f5f91b37c..a691fc885704d 100644
--- a/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
+++ b/llvm/test/CodeGen/RISCV/local-stack-slot-allocation.ll
@@ -26,16 +26,16 @@ define void @use_frame_base_reg() {
; RV64I-LABEL: use_frame_base_reg:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 24
-; RV64I-NEXT: addiw a0, a0, 1712
+; RV64I-NEXT: addi a0, a0, 1712
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 100016
; RV64I-NEXT: lui a0, 24
-; RV64I-NEXT: addiw a0, a0, 1704
+; RV64I-NEXT: addi a0, a0, 1704
; RV64I-NEXT: add a0, sp, a0
; RV64I-NEXT: lbu zero, 4(a0)
; RV64I-NEXT: lbu zero, 0(a0)
; RV64I-NEXT: lui a0, 24
-; RV64I-NEXT: addiw a0, a0, 1712
+; RV64I-NEXT: addi a0, a0, 1712
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/loop-strength-reduce-add-cheaper-than-mul.ll b/llvm/test/CodeGen/RISCV/loop-strength-reduce-add-cheaper-than-mul.ll
index fa8ca071d2189..2ec8b46cf3eea 100644
--- a/llvm/test/CodeGen/RISCV/loop-strength-reduce-add-cheaper-than-mul.ll
+++ b/llvm/test/CodeGen/RISCV/loop-strength-reduce-add-cheaper-than-mul.ll
@@ -57,7 +57,7 @@ define void @test(i32 signext %i) nounwind {
; RV64-NEXT: # %bb.1: # %bb.preheader
; RV64-NEXT: lui a2, %hi(flags2)
; RV64-NEXT: addi a2, a2, %lo(flags2)
-; RV64-NEXT: addiw a3, a3, 1
+; RV64-NEXT: addi a3, a3, 1
; RV64-NEXT: .LBB0_2: # %bb
; RV64-NEXT: # =>This Inner Loop Header: Depth=1
; RV64-NEXT: slli a4, a1, 32
diff --git a/llvm/test/CodeGen/RISCV/macro-fusion-lui-addi.ll b/llvm/test/CodeGen/RISCV/macro-fusion-lui-addi.ll
index d1b10af16063a..ed99e6030ebdf 100644
--- a/llvm/test/CodeGen/RISCV/macro-fusion-lui-addi.ll
+++ b/llvm/test/CodeGen/RISCV/macro-fusion-lui-addi.ll
@@ -77,19 +77,19 @@ define i32 @test_matint() {
; FUSION-LABEL: test_matint:
; FUSION: # %bb.0:
; FUSION-NEXT: lui a0, 1
-; FUSION-NEXT: addiw a0, a0, -2048
+; FUSION-NEXT: addi a0, a0, -2048
; FUSION-NEXT: ret
;
; FUSION-POSTRA-LABEL: test_matint:
; FUSION-POSTRA: # %bb.0:
; FUSION-POSTRA-NEXT: lui a0, 1
-; FUSION-POSTRA-NEXT: addiw a0, a0, -2048
+; FUSION-POSTRA-NEXT: addi a0, a0, -2048
; FUSION-POSTRA-NEXT: ret
;
; FUSION-GENERIC-LABEL: test_matint:
; FUSION-GENERIC: # %bb.0:
; FUSION-GENERIC-NEXT: lui a0, 1
-; FUSION-GENERIC-NEXT: addiw a0, a0, -2048
+; FUSION-GENERIC-NEXT: addi a0, a0, -2048
; FUSION-GENERIC-NEXT: ret
ret i32 2048
}
@@ -99,27 +99,27 @@ define void @test_regalloc_hint(i32 noundef signext %0, i32 noundef signext %1)
; NOFUSION: # %bb.0:
; NOFUSION-NEXT: mv a0, a1
; NOFUSION-NEXT: lui a1, 3014
-; NOFUSION-NEXT: addiw a1, a1, 334
+; NOFUSION-NEXT: addi a1, a1, 334
; NOFUSION-NEXT: tail bar
;
; FUSION-LABEL: test_regalloc_hint:
; FUSION: # %bb.0:
; FUSION-NEXT: mv a0, a1
; FUSION-NEXT: lui a1, 3014
-; FUSION-NEXT: addiw a1, a1, 334
+; FUSION-NEXT: addi a1, a1, 334
; FUSION-NEXT: tail bar
;
; FUSION-POSTRA-LABEL: test_regalloc_hint:
; FUSION-POSTRA: # %bb.0:
; FUSION-POSTRA-NEXT: mv a0, a1
; FUSION-POSTRA-NEXT: lui a1, 3014
-; FUSION-POSTRA-NEXT: addiw a1, a1, 334
+; FUSION-POSTRA-NEXT: addi a1, a1, 334
; FUSION-POSTRA-NEXT: tail bar
;
; FUSION-GENERIC-LABEL: test_regalloc_hint:
; FUSION-GENERIC: # %bb.0:
; FUSION-GENERIC-NEXT: lui a2, 3014
-; FUSION-GENERIC-NEXT: addiw a2, a2, 334
+; FUSION-GENERIC-NEXT: addi a2, a2, 334
; FUSION-GENERIC-NEXT: mv a0, a1
; FUSION-GENERIC-NEXT: mv a1, a2
; FUSION-GENERIC-NEXT: tail bar
diff --git a/llvm/test/CodeGen/RISCV/memset-inline.ll b/llvm/test/CodeGen/RISCV/memset-inline.ll
index aa397504d909c..12638927c9075 100644
--- a/llvm/test/CodeGen/RISCV/memset-inline.ll
+++ b/llvm/test/CodeGen/RISCV/memset-inline.ll
@@ -138,7 +138,7 @@ define void @memset_8(ptr %a, i8 %value) nounwind {
; RV64-FAST: # %bb.0:
; RV64-FAST-NEXT: zext.b a1, a1
; RV64-FAST-NEXT: lui a2, 4112
-; RV64-FAST-NEXT: addiw a2, a2, 257
+; RV64-FAST-NEXT: addi a2, a2, 257
; RV64-FAST-NEXT: slli a3, a2, 32
; RV64-FAST-NEXT: add a2, a2, a3
; RV64-FAST-NEXT: mul a1, a1, a2
@@ -205,7 +205,7 @@ define void @memset_16(ptr %a, i8 %value) nounwind {
; RV64-FAST: # %bb.0:
; RV64-FAST-NEXT: zext.b a1, a1
; RV64-FAST-NEXT: lui a2, 4112
-; RV64-FAST-NEXT: addiw a2, a2, 257
+; RV64-FAST-NEXT: addi a2, a2, 257
; RV64-FAST-NEXT: slli a3, a2, 32
; RV64-FAST-NEXT: add a2, a2, a3
; RV64-FAST-NEXT: mul a1, a1, a2
@@ -309,7 +309,7 @@ define void @memset_32(ptr %a, i8 %value) nounwind {
; RV64-FAST: # %bb.0:
; RV64-FAST-NEXT: zext.b a1, a1
; RV64-FAST-NEXT: lui a2, 4112
-; RV64-FAST-NEXT: addiw a2, a2, 257
+; RV64-FAST-NEXT: addi a2, a2, 257
; RV64-FAST-NEXT: slli a3, a2, 32
; RV64-FAST-NEXT: add a2, a2, a3
; RV64-FAST-NEXT: mul a1, a1, a2
@@ -487,7 +487,7 @@ define void @memset_64(ptr %a, i8 %value) nounwind {
; RV64-FAST: # %bb.0:
; RV64-FAST-NEXT: zext.b a1, a1
; RV64-FAST-NEXT: lui a2, 4112
-; RV64-FAST-NEXT: addiw a2, a2, 257
+; RV64-FAST-NEXT: addi a2, a2, 257
; RV64-FAST-NEXT: slli a3, a2, 32
; RV64-FAST-NEXT: add a2, a2, a3
; RV64-FAST-NEXT: mul a1, a1, a2
@@ -564,7 +564,7 @@ define void @aligned_memset_8(ptr align 8 %a, i8 %value) nounwind {
; RV64-BOTH: # %bb.0:
; RV64-BOTH-NEXT: zext.b a1, a1
; RV64-BOTH-NEXT: lui a2, 4112
-; RV64-BOTH-NEXT: addiw a2, a2, 257
+; RV64-BOTH-NEXT: addi a2, a2, 257
; RV64-BOTH-NEXT: slli a3, a2, 32
; RV64-BOTH-NEXT: add a2, a2, a3
; RV64-BOTH-NEXT: mul a1, a1, a2
@@ -591,7 +591,7 @@ define void @aligned_memset_16(ptr align 16 %a, i8 %value) nounwind {
; RV64-BOTH: # %bb.0:
; RV64-BOTH-NEXT: zext.b a1, a1
; RV64-BOTH-NEXT: lui a2, 4112
-; RV64-BOTH-NEXT: addiw a2, a2, 257
+; RV64-BOTH-NEXT: addi a2, a2, 257
; RV64-BOTH-NEXT: slli a3, a2, 32
; RV64-BOTH-NEXT: add a2, a2, a3
; RV64-BOTH-NEXT: mul a1, a1, a2
@@ -623,7 +623,7 @@ define void @aligned_memset_32(ptr align 32 %a, i8 %value) nounwind {
; RV64-BOTH: # %bb.0:
; RV64-BOTH-NEXT: zext.b a1, a1
; RV64-BOTH-NEXT: lui a2, 4112
-; RV64-BOTH-NEXT: addiw a2, a2, 257
+; RV64-BOTH-NEXT: addi a2, a2, 257
; RV64-BOTH-NEXT: slli a3, a2, 32
; RV64-BOTH-NEXT: add a2, a2, a3
; RV64-BOTH-NEXT: mul a1, a1, a2
@@ -665,7 +665,7 @@ define void @aligned_memset_64(ptr align 64 %a, i8 %value) nounwind {
; RV64-BOTH: # %bb.0:
; RV64-BOTH-NEXT: zext.b a1, a1
; RV64-BOTH-NEXT: lui a2, 4112
-; RV64-BOTH-NEXT: addiw a2, a2, 257
+; RV64-BOTH-NEXT: addi a2, a2, 257
; RV64-BOTH-NEXT: slli a3, a2, 32
; RV64-BOTH-NEXT: add a2, a2, a3
; RV64-BOTH-NEXT: mul a1, a1, a2
diff --git a/llvm/test/CodeGen/RISCV/narrow-shl-cst.ll b/llvm/test/CodeGen/RISCV/narrow-shl-cst.ll
index 64d2caabede04..f709c8d932f5b 100644
--- a/llvm/test/CodeGen/RISCV/narrow-shl-cst.ll
+++ b/llvm/test/CodeGen/RISCV/narrow-shl-cst.ll
@@ -246,7 +246,7 @@ define signext i32 @test14(i32 signext %x) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: slliw a0, a0, 10
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -1027
+; RV64-NEXT: addi a1, a1, -1027
; RV64-NEXT: or a0, a0, a1
; RV64-NEXT: ret
%or = shl i32 %x, 10
@@ -269,7 +269,7 @@ define signext i32 @test15(i32 signext %x) nounwind {
; RV64: # %bb.0:
; RV64-NEXT: slliw a0, a0, 10
; RV64-NEXT: lui a1, 8
-; RV64-NEXT: addiw a1, a1, -515
+; RV64-NEXT: addi a1, a1, -515
; RV64-NEXT: xor a0, a0, a1
; RV64-NEXT: ret
%xor = shl i32 %x, 10
diff --git a/llvm/test/CodeGen/RISCV/out-of-reach-emergency-slot.mir b/llvm/test/CodeGen/RISCV/out-of-reach-emergency-slot.mir
index a3a1818993f0b..5f6f3bf818d63 100644
--- a/llvm/test/CodeGen/RISCV/out-of-reach-emergency-slot.mir
+++ b/llvm/test/CodeGen/RISCV/out-of-reach-emergency-slot.mir
@@ -28,18 +28,18 @@
; CHECK-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 2016(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 2032
- ; CHECK-NEXT: sd a0, 0(sp)
+ ; CHECK-NEXT: sd a0, 0(sp) # 8-byte Folded Spill
; CHECK-NEXT: lui a0, 2
- ; CHECK-NEXT: addiw a0, a0, -2032
+ ; CHECK-NEXT: addi a0, a0, -2032
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: srli a0, sp, 12
; CHECK-NEXT: slli sp, a0, 12
- ; CHECK-NEXT: ld a0, 0(sp)
- ; CHECK-NEXT: sd a1, 0(sp)
+ ; CHECK-NEXT: ld a0, 0(sp) # 8-byte Folded Reload
+ ; CHECK-NEXT: sd a1, 0(sp) # 8-byte Folded Spill
; CHECK-NEXT: lui a1, 1
; CHECK-NEXT: add a1, sp, a1
; CHECK-NEXT: sd a0, -8(a1)
- ; CHECK-NEXT: ld a1, 0(sp)
+ ; CHECK-NEXT: ld a1, 0(sp) # 8-byte Folded Reload
; CHECK-NEXT: call foo
; CHECK-NEXT: addi sp, s0, -2032
; CHECK-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/overflow-intrinsics.ll b/llvm/test/CodeGen/RISCV/overflow-intrinsics.ll
index a5426e560bd65..98c897084ab49 100644
--- a/llvm/test/CodeGen/RISCV/overflow-intrinsics.ll
+++ b/llvm/test/CodeGen/RISCV/overflow-intrinsics.ll
@@ -1330,7 +1330,7 @@ define i16 @overflow_not_used(i16 %a, i16 %b, ptr %res) {
; RV64: # %bb.0:
; RV64-NEXT: lui a3, 16
; RV64-NEXT: add a0, a1, a0
-; RV64-NEXT: addiw a3, a3, -1
+; RV64-NEXT: addi a3, a3, -1
; RV64-NEXT: and a4, a1, a3
; RV64-NEXT: and a3, a0, a3
; RV64-NEXT: bltu a3, a4, .LBB38_2
diff --git a/llvm/test/CodeGen/RISCV/pr135206.ll b/llvm/test/CodeGen/RISCV/pr135206.ll
index 196e78d8ed8b9..75b11c373895b 100644
--- a/llvm/test/CodeGen/RISCV/pr135206.ll
+++ b/llvm/test/CodeGen/RISCV/pr135206.ll
@@ -42,10 +42,10 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vmv.v.x v8, a0
; CHECK-NEXT: lui a0, 8
-; CHECK-NEXT: addiw a0, a0, 32
+; CHECK-NEXT: addi a0, a0, 32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: vs1r.v v8, (a0) # vscale x 8-byte Folded Spill
-; CHECK-NEXT: addiw a0, a1, 1622
+; CHECK-NEXT: addi a0, a1, 1622
; CHECK-NEXT: vse8.v v8, (s0)
; CHECK-NEXT: vse8.v v8, (s1)
; CHECK-NEXT: vse8.v v8, (s2)
@@ -54,7 +54,7 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: sd s3, 64(sp)
; CHECK-NEXT: call bar
; CHECK-NEXT: lui a0, 8
-; CHECK-NEXT: addiw a0, a0, 32
+; CHECK-NEXT: addi a0, a0, 32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: vl1r.v v8, (a0) # vscale x 8-byte Folded Reload
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
@@ -66,7 +66,7 @@ define i1 @foo() nounwind "probe-stack"="inline-asm" "target-features"="+v" {
; CHECK-NEXT: csrr a1, vlenb
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: lui a1, 8
-; CHECK-NEXT: addiw a1, a1, -1952
+; CHECK-NEXT: addi a1, a1, -1952
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s0, 2016(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/pr56457.ll b/llvm/test/CodeGen/RISCV/pr56457.ll
index cf518b31a190b..0dca858089167 100644
--- a/llvm/test/CodeGen/RISCV/pr56457.ll
+++ b/llvm/test/CodeGen/RISCV/pr56457.ll
@@ -14,9 +14,9 @@ define i15 @foo(i15 %x) nounwind {
; CHECK-NEXT: lui a3, 209715
; CHECK-NEXT: lui a4, 61681
; CHECK-NEXT: or a0, a0, a1
-; CHECK-NEXT: addiw a1, a2, 1365
-; CHECK-NEXT: addiw a2, a3, 819
-; CHECK-NEXT: addiw a3, a4, -241
+; CHECK-NEXT: addi a1, a2, 1365
+; CHECK-NEXT: addi a2, a3, 819
+; CHECK-NEXT: addi a3, a4, -241
; CHECK-NEXT: slli a4, a2, 32
; CHECK-NEXT: add a2, a2, a4
; CHECK-NEXT: slli a4, a3, 32
@@ -43,7 +43,7 @@ define i15 @foo(i15 %x) nounwind {
; CHECK-NEXT: srli a1, a0, 4
; CHECK-NEXT: add a0, a0, a1
; CHECK-NEXT: lui a1, 4112
-; CHECK-NEXT: addiw a1, a1, 257
+; CHECK-NEXT: addi a1, a1, 257
; CHECK-NEXT: and a0, a0, a3
; CHECK-NEXT: slli a2, a1, 32
; CHECK-NEXT: add a1, a1, a2
diff --git a/llvm/test/CodeGen/RISCV/pr58286.ll b/llvm/test/CodeGen/RISCV/pr58286.ll
index 5e146b4b12ca2..ec53646aca62c 100644
--- a/llvm/test/CodeGen/RISCV/pr58286.ll
+++ b/llvm/test/CodeGen/RISCV/pr58286.ll
@@ -7,7 +7,7 @@ define void @func() {
; RV64I-LABEL: func:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 4112
; RV64I-NEXT: lui a0, %hi(var)
@@ -25,11 +25,11 @@ define void @func() {
; RV64I-NEXT: lw t4, %lo(var)(a0)
; RV64I-NEXT: lw t5, %lo(var)(a0)
; RV64I-NEXT: lw t6, %lo(var)(a0)
-; RV64I-NEXT: sd s0, 0(sp)
+; RV64I-NEXT: sd s0, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui s0, 1
; RV64I-NEXT: add s0, sp, s0
; RV64I-NEXT: sw a1, 12(s0)
-; RV64I-NEXT: ld s0, 0(sp)
+; RV64I-NEXT: ld s0, 0(sp) # 8-byte Folded Reload
; RV64I-NEXT: sw a1, %lo(var)(a0)
; RV64I-NEXT: sw a2, %lo(var)(a0)
; RV64I-NEXT: sw a3, %lo(var)(a0)
@@ -45,7 +45,7 @@ define void @func() {
; RV64I-NEXT: sw t5, %lo(var)(a0)
; RV64I-NEXT: sw t6, %lo(var)(a0)
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -71,11 +71,11 @@ define void @func() {
; RV32I-NEXT: lw t4, %lo(var)(a0)
; RV32I-NEXT: lw t5, %lo(var)(a0)
; RV32I-NEXT: lw t6, %lo(var)(a0)
-; RV32I-NEXT: sw s0, 0(sp)
+; RV32I-NEXT: sw s0, 0(sp) # 4-byte Folded Spill
; RV32I-NEXT: lui s0, 1
; RV32I-NEXT: add s0, sp, s0
; RV32I-NEXT: sw a1, 12(s0)
-; RV32I-NEXT: lw s0, 0(sp)
+; RV32I-NEXT: lw s0, 0(sp) # 4-byte Folded Reload
; RV32I-NEXT: sw a1, %lo(var)(a0)
; RV32I-NEXT: sw a2, %lo(var)(a0)
; RV32I-NEXT: sw a3, %lo(var)(a0)
@@ -142,7 +142,7 @@ define void @shrink_wrap(i1 %c) {
; RV64I-NEXT: bnez a0, .LBB1_2
; RV64I-NEXT: # %bb.1: # %bar
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 4112
; RV64I-NEXT: lui a0, %hi(var)
@@ -160,11 +160,11 @@ define void @shrink_wrap(i1 %c) {
; RV64I-NEXT: lw t4, %lo(var)(a0)
; RV64I-NEXT: lw t5, %lo(var)(a0)
; RV64I-NEXT: lw t6, %lo(var)(a0)
-; RV64I-NEXT: sd s0, 0(sp)
+; RV64I-NEXT: sd s0, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui s0, 1
; RV64I-NEXT: add s0, sp, s0
; RV64I-NEXT: sw a1, 12(s0)
-; RV64I-NEXT: ld s0, 0(sp)
+; RV64I-NEXT: ld s0, 0(sp) # 8-byte Folded Reload
; RV64I-NEXT: sw a1, %lo(var)(a0)
; RV64I-NEXT: sw a2, %lo(var)(a0)
; RV64I-NEXT: sw a3, %lo(var)(a0)
@@ -180,7 +180,7 @@ define void @shrink_wrap(i1 %c) {
; RV64I-NEXT: sw t5, %lo(var)(a0)
; RV64I-NEXT: sw t6, %lo(var)(a0)
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: .LBB1_2: # %foo
@@ -210,11 +210,11 @@ define void @shrink_wrap(i1 %c) {
; RV32I-NEXT: lw t4, %lo(var)(a0)
; RV32I-NEXT: lw t5, %lo(var)(a0)
; RV32I-NEXT: lw t6, %lo(var)(a0)
-; RV32I-NEXT: sw s0, 0(sp)
+; RV32I-NEXT: sw s0, 0(sp) # 4-byte Folded Spill
; RV32I-NEXT: lui s0, 1
; RV32I-NEXT: add s0, sp, s0
; RV32I-NEXT: sw a1, 12(s0)
-; RV32I-NEXT: lw s0, 0(sp)
+; RV32I-NEXT: lw s0, 0(sp) # 4-byte Folded Reload
; RV32I-NEXT: sw a1, %lo(var)(a0)
; RV32I-NEXT: sw a2, %lo(var)(a0)
; RV32I-NEXT: sw a3, %lo(var)(a0)
diff --git a/llvm/test/CodeGen/RISCV/pr58511.ll b/llvm/test/CodeGen/RISCV/pr58511.ll
index e5cba679729fa..a93fef20f5fea 100644
--- a/llvm/test/CodeGen/RISCV/pr58511.ll
+++ b/llvm/test/CodeGen/RISCV/pr58511.ll
@@ -7,7 +7,7 @@ define i32 @f(i1 %0, i32 %1, ptr %2) {
; CHECK-NEXT: slli a0, a0, 63
; CHECK-NEXT: lui a3, 4097
; CHECK-NEXT: srai a0, a0, 63
-; CHECK-NEXT: addiw a3, a3, -2047
+; CHECK-NEXT: addi a3, a3, -2047
; CHECK-NEXT: or a0, a0, a3
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sw a1, 0(a2)
@@ -26,7 +26,7 @@ define i32 @g(i1 %0, i32 %1, ptr %2) {
; CHECK-NEXT: andi a0, a0, 1
; CHECK-NEXT: lui a3, 4097
; CHECK-NEXT: addi a0, a0, -1
-; CHECK-NEXT: addiw a3, a3, -2047
+; CHECK-NEXT: addi a3, a3, -2047
; CHECK-NEXT: or a0, a0, a3
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sw a1, 0(a2)
@@ -44,7 +44,7 @@ define i32 @h(i1 %0, i32 %1, ptr %2) {
; CHECK: # %bb.0: # %BB
; CHECK-NEXT: lui a3, 4097
; CHECK-NEXT: slli a0, a0, 63
-; CHECK-NEXT: addiw a3, a3, -2047
+; CHECK-NEXT: addi a3, a3, -2047
; CHECK-NEXT: srai a0, a0, 63
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: and a0, a0, a3
@@ -63,7 +63,7 @@ define i32 @i(i1 %0, i32 %1, ptr %2) {
; CHECK: # %bb.0: # %BB
; CHECK-NEXT: andi a0, a0, 1
; CHECK-NEXT: lui a3, 4097
-; CHECK-NEXT: addiw a3, a3, -2047
+; CHECK-NEXT: addi a3, a3, -2047
; CHECK-NEXT: addi a0, a0, -1
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: and a0, a0, a3
diff --git a/llvm/test/CodeGen/RISCV/pr68855.ll b/llvm/test/CodeGen/RISCV/pr68855.ll
index 8031bf4f30411..90d4be4d9a0e3 100644
--- a/llvm/test/CodeGen/RISCV/pr68855.ll
+++ b/llvm/test/CodeGen/RISCV/pr68855.ll
@@ -7,7 +7,7 @@ define i16 @narrow_load(ptr %p1, ptr %p2) {
; CHECK-NEXT: lhu a2, 0(a0)
; CHECK-NEXT: lui a3, 2
; CHECK-NEXT: lui a4, 16
-; CHECK-NEXT: addiw a3, a3, -1
+; CHECK-NEXT: addi a3, a3, -1
; CHECK-NEXT: addi a4, a4, -1
; CHECK-NEXT: xor a2, a2, a3
; CHECK-NEXT: xor a4, a3, a4
diff --git a/llvm/test/CodeGen/RISCV/pr69586.ll b/llvm/test/CodeGen/RISCV/pr69586.ll
index 4ab48930ae78c..e761d3a9a75c3 100644
--- a/llvm/test/CodeGen/RISCV/pr69586.ll
+++ b/llvm/test/CodeGen/RISCV/pr69586.ll
@@ -253,7 +253,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v4, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v26, v8
; NOREMAT-NEXT: lui a4, 4
-; NOREMAT-NEXT: addiw a0, a4, 512
+; NOREMAT-NEXT: addi a0, a4, 512
; NOREMAT-NEXT: sd a0, 496(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a0, a7, a0
; NOREMAT-NEXT: vle32.v v8, (a0)
@@ -265,7 +265,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v28, (a2)
; NOREMAT-NEXT: vle32.v v6, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v30, v12
-; NOREMAT-NEXT: addiw a2, a4, 1536
+; NOREMAT-NEXT: addi a2, a4, 1536
; NOREMAT-NEXT: sd a2, 480(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v12, (a2)
@@ -278,7 +278,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v4, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v22, v8
; NOREMAT-NEXT: lui a5, 5
-; NOREMAT-NEXT: addiw a2, a5, -1536
+; NOREMAT-NEXT: addi a2, a5, -1536
; NOREMAT-NEXT: sd a2, 464(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v8, (a2)
@@ -291,13 +291,13 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v26, (a2)
; NOREMAT-NEXT: vle32.v v28, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v6, v12
-; NOREMAT-NEXT: addiw a2, a5, -512
+; NOREMAT-NEXT: addi a2, a5, -512
; NOREMAT-NEXT: sd a2, 448(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v12, (a2)
; NOREMAT-NEXT: vle32.v v6, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v30, v24
-; NOREMAT-NEXT: addiw a2, a5, 512
+; NOREMAT-NEXT: addi a2, a5, 512
; NOREMAT-NEXT: sd a2, 440(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v24, (a2)
@@ -309,7 +309,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v8, (a2)
; NOREMAT-NEXT: vle32.v v4, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v22, v26
-; NOREMAT-NEXT: addiw a2, a5, 1536
+; NOREMAT-NEXT: addi a2, a5, 1536
; NOREMAT-NEXT: sd a2, 424(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v22, (a2)
@@ -322,7 +322,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v28, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v6, v18
; NOREMAT-NEXT: lui a6, 6
-; NOREMAT-NEXT: addiw a2, a6, -1536
+; NOREMAT-NEXT: addi a2, a6, -1536
; NOREMAT-NEXT: sd a2, 408(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v18, (a2)
@@ -334,13 +334,13 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v16, (a2)
; NOREMAT-NEXT: vle32.v v24, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v30, v8
-; NOREMAT-NEXT: addiw a2, a6, -512
+; NOREMAT-NEXT: addi a2, a6, -512
; NOREMAT-NEXT: sd a2, 392(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v8, (a2)
; NOREMAT-NEXT: vle32.v v30, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v4, v22
-; NOREMAT-NEXT: addiw a2, a6, 512
+; NOREMAT-NEXT: addi a2, a6, 512
; NOREMAT-NEXT: sd a2, 384(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v22, (a2)
@@ -352,7 +352,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v26, (a2)
; NOREMAT-NEXT: vle32.v v2, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v28, v18
-; NOREMAT-NEXT: addiw a2, a6, 1536
+; NOREMAT-NEXT: addi a2, a6, 1536
; NOREMAT-NEXT: sd a2, 368(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v18, (a2)
@@ -365,7 +365,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v6, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v24, v8
; NOREMAT-NEXT: lui s0, 7
-; NOREMAT-NEXT: addiw a2, s0, -1536
+; NOREMAT-NEXT: addi a2, s0, -1536
; NOREMAT-NEXT: sd a2, 352(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v8, (a2)
@@ -379,13 +379,13 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: addi a0, sp, 640
; NOREMAT-NEXT: vl2r.v v12, (a0) # vscale x 16-byte Folded Reload
; NOREMAT-NEXT: sf.vc.vv 3, 0, v12, v22
-; NOREMAT-NEXT: addiw a2, s0, -512
+; NOREMAT-NEXT: addi a2, s0, -512
; NOREMAT-NEXT: sd a2, 336(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v22, (a2)
; NOREMAT-NEXT: vle32.v v12, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v4, v26
-; NOREMAT-NEXT: addiw a2, s0, 512
+; NOREMAT-NEXT: addi a2, s0, 512
; NOREMAT-NEXT: sd a2, 328(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: lui t3, 7
; NOREMAT-NEXT: add a2, a7, a2
@@ -398,7 +398,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v18, (a2)
; NOREMAT-NEXT: vle32.v v2, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v28, v16
-; NOREMAT-NEXT: addiw a2, t3, 1536
+; NOREMAT-NEXT: addi a2, t3, 1536
; NOREMAT-NEXT: sd a2, 312(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v16, (a2)
@@ -410,7 +410,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vle32.v v8, (a2)
; NOREMAT-NEXT: vle32.v v6, (a2)
; NOREMAT-NEXT: sf.vc.vv 3, 0, v24, v14
-; NOREMAT-NEXT: addiw a2, t4, -1536
+; NOREMAT-NEXT: addi a2, t4, -1536
; NOREMAT-NEXT: sd a2, 296(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v14, (a2)
@@ -421,7 +421,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: add a2, a7, a2
; NOREMAT-NEXT: vle32.v v22, (a2)
; NOREMAT-NEXT: vle32.v v30, (a2)
-; NOREMAT-NEXT: addiw a0, t4, -512
+; NOREMAT-NEXT: addi a0, t4, -512
; NOREMAT-NEXT: sd a0, 280(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a0, a7, a0
; NOREMAT-NEXT: sf.vc.vv 3, 0, v12, v0
@@ -456,34 +456,34 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: sd t3, 224(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a0, a1, t4
; NOREMAT-NEXT: sd a0, 216(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a0, t4, 512
+; NOREMAT-NEXT: addi a0, t4, 512
; NOREMAT-NEXT: sd a0, 192(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a0, t4, 1024
+; NOREMAT-NEXT: addi a0, t4, 1024
; NOREMAT-NEXT: sd a0, 176(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a0, t4, 1536
+; NOREMAT-NEXT: addi a0, t4, 1536
; NOREMAT-NEXT: sd a0, 160(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: slli s1, s1, 11
; NOREMAT-NEXT: sd s1, 128(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: lui a0, 9
-; NOREMAT-NEXT: addiw a2, a0, -1536
+; NOREMAT-NEXT: addi a2, a0, -1536
; NOREMAT-NEXT: sd a2, 88(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a2, a0, -1024
+; NOREMAT-NEXT: addi a2, a0, -1024
; NOREMAT-NEXT: sd a2, 72(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a2, a0, -512
+; NOREMAT-NEXT: addi a2, a0, -512
; NOREMAT-NEXT: sd a2, 40(sp) # 8-byte Folded Spill
; NOREMAT-NEXT: add a2, a1, a0
; NOREMAT-NEXT: sd a2, 208(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw s11, a0, 512
-; NOREMAT-NEXT: addiw s7, a0, 1024
-; NOREMAT-NEXT: addiw s3, a0, 1536
+; NOREMAT-NEXT: addi s11, a0, 512
+; NOREMAT-NEXT: addi s7, a0, 1024
+; NOREMAT-NEXT: addi s3, a0, 1536
; NOREMAT-NEXT: slli s1, t2, 11
; NOREMAT-NEXT: lui a0, 10
-; NOREMAT-NEXT: addiw t2, a0, -1536
-; NOREMAT-NEXT: addiw a7, a0, -1024
-; NOREMAT-NEXT: addiw a4, a0, -512
+; NOREMAT-NEXT: addi t2, a0, -1536
+; NOREMAT-NEXT: addi a7, a0, -1024
+; NOREMAT-NEXT: addi a4, a0, -512
; NOREMAT-NEXT: add a2, a1, a0
; NOREMAT-NEXT: sd a2, 200(sp) # 8-byte Folded Spill
-; NOREMAT-NEXT: addiw a0, a0, 512
+; NOREMAT-NEXT: addi a0, a0, 512
; NOREMAT-NEXT: ld a2, 512(sp) # 8-byte Folded Reload
; NOREMAT-NEXT: add a2, a1, a2
; NOREMAT-NEXT: ld a3, 504(sp) # 8-byte Folded Reload
@@ -1195,7 +1195,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: addi a2, a2, 432
; REMAT-NEXT: vs2r.v v8, (a2) # vscale x 16-byte Folded Spill
; REMAT-NEXT: lui a2, 4
-; REMAT-NEXT: addiw a2, a2, 512
+; REMAT-NEXT: addi a2, a2, 512
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v2, (a2)
; REMAT-NEXT: csrr a3, vlenb
@@ -1217,7 +1217,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v8, v14
; REMAT-NEXT: vle32.v v22, (a2)
; REMAT-NEXT: lui a2, 4
-; REMAT-NEXT: addiw a2, a2, 1536
+; REMAT-NEXT: addi a2, a2, 1536
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v24, (a2)
; REMAT-NEXT: csrr a3, vlenb
@@ -1242,7 +1242,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v10, v18
; REMAT-NEXT: vle32.v v10, (a2)
; REMAT-NEXT: lui a2, 5
-; REMAT-NEXT: addiw a2, a2, -1536
+; REMAT-NEXT: addi a2, a2, -1536
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v28, (a2)
; REMAT-NEXT: csrr a3, vlenb
@@ -1266,7 +1266,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v14, v6
; REMAT-NEXT: vle32.v v14, (a2)
; REMAT-NEXT: lui a2, 5
-; REMAT-NEXT: addiw a2, a2, -512
+; REMAT-NEXT: addi a2, a2, -512
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v6, (a2)
; REMAT-NEXT: csrr a3, vlenb
@@ -1288,7 +1288,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v18, v2
; REMAT-NEXT: vle32.v v18, (a2)
; REMAT-NEXT: lui a2, 5
-; REMAT-NEXT: addiw a2, a2, 512
+; REMAT-NEXT: addi a2, a2, 512
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v2, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v20, v0
@@ -1300,7 +1300,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v22, v24
; REMAT-NEXT: vle32.v v22, (a2)
; REMAT-NEXT: lui s4, 5
-; REMAT-NEXT: addiw s4, s4, 1536
+; REMAT-NEXT: addi s4, s4, 1536
; REMAT-NEXT: add a2, a0, s4
; REMAT-NEXT: vle32.v v24, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v8, v26
@@ -1312,7 +1312,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v10, v28
; REMAT-NEXT: vle32.v v10, (a2)
; REMAT-NEXT: lui s3, 6
-; REMAT-NEXT: addiw s3, s3, -1536
+; REMAT-NEXT: addi s3, s3, -1536
; REMAT-NEXT: add a2, a0, s3
; REMAT-NEXT: vle32.v v28, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v12, v30
@@ -1324,7 +1324,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v14, v6
; REMAT-NEXT: vle32.v v14, (a2)
; REMAT-NEXT: lui a2, 6
-; REMAT-NEXT: addiw a2, a2, -512
+; REMAT-NEXT: addi a2, a2, -512
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v6, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v16, v4
@@ -1336,7 +1336,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v18, v2
; REMAT-NEXT: vle32.v v18, (a2)
; REMAT-NEXT: lui s0, 6
-; REMAT-NEXT: addiw s0, s0, 512
+; REMAT-NEXT: addi s0, s0, 512
; REMAT-NEXT: add a2, a0, s0
; REMAT-NEXT: vle32.v v2, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v20, v0
@@ -1348,7 +1348,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v22, v24
; REMAT-NEXT: vle32.v v22, (a2)
; REMAT-NEXT: lui t6, 6
-; REMAT-NEXT: addiw t6, t6, 1536
+; REMAT-NEXT: addi t6, t6, 1536
; REMAT-NEXT: add a2, a0, t6
; REMAT-NEXT: vle32.v v24, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v8, v26
@@ -1360,7 +1360,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v10, v28
; REMAT-NEXT: vle32.v v10, (a2)
; REMAT-NEXT: lui a2, 7
-; REMAT-NEXT: addiw a2, a2, -1536
+; REMAT-NEXT: addi a2, a2, -1536
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v28, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v12, v30
@@ -1372,7 +1372,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v14, v6
; REMAT-NEXT: vle32.v v14, (a2)
; REMAT-NEXT: lui a2, 7
-; REMAT-NEXT: addiw a2, a2, -512
+; REMAT-NEXT: addi a2, a2, -512
; REMAT-NEXT: add a2, a0, a2
; REMAT-NEXT: vle32.v v6, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v16, v4
@@ -1384,7 +1384,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v18, v2
; REMAT-NEXT: vle32.v v18, (a2)
; REMAT-NEXT: lui t2, 7
-; REMAT-NEXT: addiw t2, t2, 512
+; REMAT-NEXT: addi t2, t2, 512
; REMAT-NEXT: add a2, a0, t2
; REMAT-NEXT: vle32.v v2, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v20, v0
@@ -1396,7 +1396,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v22, v24
; REMAT-NEXT: vle32.v v22, (a2)
; REMAT-NEXT: lui t0, 7
-; REMAT-NEXT: addiw t0, t0, 1536
+; REMAT-NEXT: addi t0, t0, 1536
; REMAT-NEXT: add a2, a0, t0
; REMAT-NEXT: vle32.v v24, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v8, v26
@@ -1408,7 +1408,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v10, v28
; REMAT-NEXT: vle32.v v10, (a2)
; REMAT-NEXT: lui a6, 8
-; REMAT-NEXT: addiw a6, a6, -1536
+; REMAT-NEXT: addi a6, a6, -1536
; REMAT-NEXT: add a2, a0, a6
; REMAT-NEXT: vle32.v v28, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v12, v30
@@ -1420,7 +1420,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: sf.vc.vv 3, 0, v14, v6
; REMAT-NEXT: vle32.v v14, (a2)
; REMAT-NEXT: lui a3, 8
-; REMAT-NEXT: addiw a3, a3, -512
+; REMAT-NEXT: addi a3, a3, -512
; REMAT-NEXT: add a2, a0, a3
; REMAT-NEXT: vle32.v v6, (a2)
; REMAT-NEXT: sf.vc.vv 3, 0, v16, v4
@@ -1537,7 +1537,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 192(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 4
-; REMAT-NEXT: addiw a0, a0, 512
+; REMAT-NEXT: addi a0, a0, 512
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 184(sp) # 8-byte Folded Spill
; REMAT-NEXT: li a0, 17
@@ -1545,7 +1545,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 176(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 4
-; REMAT-NEXT: addiw a0, a0, 1536
+; REMAT-NEXT: addi a0, a0, 1536
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 168(sp) # 8-byte Folded Spill
; REMAT-NEXT: li a0, 9
@@ -1553,7 +1553,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 160(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 5
-; REMAT-NEXT: addiw a0, a0, -1536
+; REMAT-NEXT: addi a0, a0, -1536
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 152(sp) # 8-byte Folded Spill
; REMAT-NEXT: li a0, 19
@@ -1561,14 +1561,14 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 144(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 5
-; REMAT-NEXT: addiw a0, a0, -512
+; REMAT-NEXT: addi a0, a0, -512
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 136(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 5
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 128(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 5
-; REMAT-NEXT: addiw a0, a0, 512
+; REMAT-NEXT: addi a0, a0, 512
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 120(sp) # 8-byte Folded Spill
; REMAT-NEXT: add s7, a1, s7
@@ -1584,7 +1584,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add s2, a1, s2
; REMAT-NEXT: sd s2, 80(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 6
-; REMAT-NEXT: addiw a0, a0, -512
+; REMAT-NEXT: addi a0, a0, -512
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 72(sp) # 8-byte Folded Spill
; REMAT-NEXT: add s1, a1, s1
@@ -1600,13 +1600,13 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add t5, a1, t5
; REMAT-NEXT: sd t5, 32(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui a0, 7
-; REMAT-NEXT: addiw a0, a0, -1536
+; REMAT-NEXT: addi a0, a0, -1536
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: sd a0, 24(sp) # 8-byte Folded Spill
; REMAT-NEXT: add t4, a1, t4
; REMAT-NEXT: sd t4, 16(sp) # 8-byte Folded Spill
; REMAT-NEXT: lui ra, 7
-; REMAT-NEXT: addiw ra, ra, -512
+; REMAT-NEXT: addi ra, ra, -512
; REMAT-NEXT: add ra, a1, ra
; REMAT-NEXT: add s11, a1, t3
; REMAT-NEXT: add s10, a1, t2
@@ -1618,53 +1618,53 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; REMAT-NEXT: add s4, a1, a3
; REMAT-NEXT: add s3, a1, a2
; REMAT-NEXT: lui s2, 8
-; REMAT-NEXT: addiw s2, s2, 512
+; REMAT-NEXT: addi s2, s2, 512
; REMAT-NEXT: add s2, a1, s2
; REMAT-NEXT: lui s1, 8
-; REMAT-NEXT: addiw s1, s1, 1024
+; REMAT-NEXT: addi s1, s1, 1024
; REMAT-NEXT: add s1, a1, s1
; REMAT-NEXT: lui s0, 8
-; REMAT-NEXT: addiw s0, s0, 1536
+; REMAT-NEXT: addi s0, s0, 1536
; REMAT-NEXT: add s0, a1, s0
; REMAT-NEXT: li t6, 17
; REMAT-NEXT: slli t6, t6, 11
; REMAT-NEXT: add t6, a1, t6
; REMAT-NEXT: lui t5, 9
-; REMAT-NEXT: addiw t5, t5, -1536
+; REMAT-NEXT: addi t5, t5, -1536
; REMAT-NEXT: add t5, a1, t5
; REMAT-NEXT: lui t4, 9
-; REMAT-NEXT: addiw t4, t4, -1024
+; REMAT-NEXT: addi t4, t4, -1024
; REMAT-NEXT: add t4, a1, t4
; REMAT-NEXT: lui t3, 9
-; REMAT-NEXT: addiw t3, t3, -512
+; REMAT-NEXT: addi t3, t3, -512
; REMAT-NEXT: add t3, a1, t3
; REMAT-NEXT: lui t2, 9
; REMAT-NEXT: add t2, a1, t2
; REMAT-NEXT: lui t1, 9
-; REMAT-NEXT: addiw t1, t1, 512
+; REMAT-NEXT: addi t1, t1, 512
; REMAT-NEXT: add t1, a1, t1
; REMAT-NEXT: lui t0, 9
-; REMAT-NEXT: addiw t0, t0, 1024
+; REMAT-NEXT: addi t0, t0, 1024
; REMAT-NEXT: add t0, a1, t0
; REMAT-NEXT: lui a7, 9
-; REMAT-NEXT: addiw a7, a7, 1536
+; REMAT-NEXT: addi a7, a7, 1536
; REMAT-NEXT: add a7, a1, a7
; REMAT-NEXT: li a6, 19
; REMAT-NEXT: slli a6, a6, 11
; REMAT-NEXT: add a6, a1, a6
; REMAT-NEXT: lui a5, 10
-; REMAT-NEXT: addiw a5, a5, -1536
+; REMAT-NEXT: addi a5, a5, -1536
; REMAT-NEXT: add a5, a1, a5
; REMAT-NEXT: lui a4, 10
-; REMAT-NEXT: addiw a4, a4, -1024
+; REMAT-NEXT: addi a4, a4, -1024
; REMAT-NEXT: add a4, a1, a4
; REMAT-NEXT: lui a3, 10
-; REMAT-NEXT: addiw a3, a3, -512
+; REMAT-NEXT: addi a3, a3, -512
; REMAT-NEXT: add a3, a1, a3
; REMAT-NEXT: lui a2, 10
; REMAT-NEXT: add a2, a1, a2
; REMAT-NEXT: lui a0, 10
-; REMAT-NEXT: addiw a0, a0, 512
+; REMAT-NEXT: addi a0, a0, 512
; REMAT-NEXT: add a0, a1, a0
; REMAT-NEXT: addi a1, a1, 1536
; REMAT-NEXT: sf.vc.v.i 2, 0, v8, 0
diff --git a/llvm/test/CodeGen/RISCV/pr90730.ll b/llvm/test/CodeGen/RISCV/pr90730.ll
index 7c3f4b43089cb..904761845c4d8 100644
--- a/llvm/test/CodeGen/RISCV/pr90730.ll
+++ b/llvm/test/CodeGen/RISCV/pr90730.ll
@@ -5,7 +5,7 @@ define i32 @pr90730(i32 %x, i1 %y, ptr %p) {
; CHECK-LABEL: pr90730:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: lui a1, 8
-; CHECK-NEXT: addiw a1, a1, -960
+; CHECK-NEXT: addi a1, a1, -960
; CHECK-NEXT: andn a0, a1, a0
; CHECK-NEXT: sw zero, 0(a2)
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/pr95271.ll b/llvm/test/CodeGen/RISCV/pr95271.ll
index aa941cb803627..fd69e85d0b760 100644
--- a/llvm/test/CodeGen/RISCV/pr95271.ll
+++ b/llvm/test/CodeGen/RISCV/pr95271.ll
@@ -34,12 +34,12 @@ define i32 @PR95271(ptr %p) {
; RV64I: # %bb.0:
; RV64I-NEXT: lw a0, 0(a0)
; RV64I-NEXT: lui a1, 349525
-; RV64I-NEXT: addiw a1, a1, 1365
+; RV64I-NEXT: addi a1, a1, 1365
; RV64I-NEXT: addi a2, a0, 1
; RV64I-NEXT: srli a2, a2, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: addiw a0, a0, 1
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
diff --git a/llvm/test/CodeGen/RISCV/prefer-w-inst.ll b/llvm/test/CodeGen/RISCV/prefer-w-inst.ll
index 34ab74d78a76f..dcc4f04340055 100644
--- a/llvm/test/CodeGen/RISCV/prefer-w-inst.ll
+++ b/llvm/test/CodeGen/RISCV/prefer-w-inst.ll
@@ -17,7 +17,7 @@ define i32 @addiw(i32 %a) {
; NO-STRIP-LABEL: addiw:
; NO-STRIP: # %bb.0:
; NO-STRIP-NEXT: lui a1, 1
-; NO-STRIP-NEXT: addiw a1, a1, -1
+; NO-STRIP-NEXT: addi a1, a1, -1
; NO-STRIP-NEXT: addw a0, a0, a1
; NO-STRIP-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/prefetch.ll b/llvm/test/CodeGen/RISCV/prefetch.ll
index e4c9b91b0dbbd..bc46c60c053f3 100644
--- a/llvm/test/CodeGen/RISCV/prefetch.ll
+++ b/llvm/test/CodeGen/RISCV/prefetch.ll
@@ -715,10 +715,10 @@ define void @test_prefetch_frameindex_1() nounwind {
; RV64I-LABEL: test_prefetch_frameindex_1:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: ret
;
@@ -737,25 +737,25 @@ define void @test_prefetch_frameindex_1() nounwind {
; RV64ZICBOP-LABEL: test_prefetch_frameindex_1:
; RV64ZICBOP: # %bb.0:
; RV64ZICBOP-NEXT: lui a0, 1
-; RV64ZICBOP-NEXT: addiw a0, a0, 16
+; RV64ZICBOP-NEXT: addi a0, a0, 16
; RV64ZICBOP-NEXT: sub sp, sp, a0
; RV64ZICBOP-NEXT: addi a0, sp, 16
; RV64ZICBOP-NEXT: prefetch.r 0(a0)
; RV64ZICBOP-NEXT: lui a0, 1
-; RV64ZICBOP-NEXT: addiw a0, a0, 16
+; RV64ZICBOP-NEXT: addi a0, a0, 16
; RV64ZICBOP-NEXT: add sp, sp, a0
; RV64ZICBOP-NEXT: ret
;
; RV64ZICBOPZIHINTNTL-LABEL: test_prefetch_frameindex_1:
; RV64ZICBOPZIHINTNTL: # %bb.0:
; RV64ZICBOPZIHINTNTL-NEXT: lui a0, 1
-; RV64ZICBOPZIHINTNTL-NEXT: addiw a0, a0, 16
+; RV64ZICBOPZIHINTNTL-NEXT: addi a0, a0, 16
; RV64ZICBOPZIHINTNTL-NEXT: sub sp, sp, a0
; RV64ZICBOPZIHINTNTL-NEXT: addi a0, sp, 16
; RV64ZICBOPZIHINTNTL-NEXT: ntl.all
; RV64ZICBOPZIHINTNTL-NEXT: prefetch.r 0(a0)
; RV64ZICBOPZIHINTNTL-NEXT: lui a0, 1
-; RV64ZICBOPZIHINTNTL-NEXT: addiw a0, a0, 16
+; RV64ZICBOPZIHINTNTL-NEXT: addi a0, a0, 16
; RV64ZICBOPZIHINTNTL-NEXT: add sp, sp, a0
; RV64ZICBOPZIHINTNTL-NEXT: ret
%data = alloca [1024 x i32], align 4
@@ -1158,14 +1158,14 @@ define void @test_prefetch_constant_address_1() nounwind {
; RV64ZICBOP-LABEL: test_prefetch_constant_address_1:
; RV64ZICBOP: # %bb.0:
; RV64ZICBOP-NEXT: lui a0, 1
-; RV64ZICBOP-NEXT: addiw a0, a0, 31
+; RV64ZICBOP-NEXT: addi a0, a0, 31
; RV64ZICBOP-NEXT: prefetch.r 0(a0)
; RV64ZICBOP-NEXT: ret
;
; RV64ZICBOPZIHINTNTL-LABEL: test_prefetch_constant_address_1:
; RV64ZICBOPZIHINTNTL: # %bb.0:
; RV64ZICBOPZIHINTNTL-NEXT: lui a0, 1
-; RV64ZICBOPZIHINTNTL-NEXT: addiw a0, a0, 31
+; RV64ZICBOPZIHINTNTL-NEXT: addi a0, a0, 31
; RV64ZICBOPZIHINTNTL-NEXT: ntl.all
; RV64ZICBOPZIHINTNTL-NEXT: prefetch.r 0(a0)
; RV64ZICBOPZIHINTNTL-NEXT: ret
@@ -1225,14 +1225,14 @@ define void @test_prefetch_constant_address_3() nounwind {
; RV64ZICBOP-LABEL: test_prefetch_constant_address_3:
; RV64ZICBOP: # %bb.0:
; RV64ZICBOP-NEXT: lui a0, 1048561
-; RV64ZICBOP-NEXT: addiw a0, a0, 31
+; RV64ZICBOP-NEXT: addi a0, a0, 31
; RV64ZICBOP-NEXT: prefetch.r 0(a0)
; RV64ZICBOP-NEXT: ret
;
; RV64ZICBOPZIHINTNTL-LABEL: test_prefetch_constant_address_3:
; RV64ZICBOPZIHINTNTL: # %bb.0:
; RV64ZICBOPZIHINTNTL-NEXT: lui a0, 1048561
-; RV64ZICBOPZIHINTNTL-NEXT: addiw a0, a0, 31
+; RV64ZICBOPZIHINTNTL-NEXT: addi a0, a0, 31
; RV64ZICBOPZIHINTNTL-NEXT: ntl.all
; RV64ZICBOPZIHINTNTL-NEXT: prefetch.r 0(a0)
; RV64ZICBOPZIHINTNTL-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/prolog-epilogue.ll b/llvm/test/CodeGen/RISCV/prolog-epilogue.ll
index 18cfa7233e4f7..e86b4125d327f 100644
--- a/llvm/test/CodeGen/RISCV/prolog-epilogue.ll
+++ b/llvm/test/CodeGen/RISCV/prolog-epilogue.ll
@@ -257,13 +257,13 @@ define void @frame_4kb_offset_128() {
; RV64I-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; RV64I-NEXT: .cfi_offset ra, -8
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 128
+; RV64I-NEXT: addi a0, a0, 128
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 6256
; RV64I-NEXT: addi a0, sp, 8
; RV64I-NEXT: call callee
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 128
+; RV64I-NEXT: addi a0, a0, 128
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 2032
; RV64I-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
@@ -393,13 +393,13 @@ define void @frame_8kb_offset_128() {
; RV64I-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; RV64I-NEXT: .cfi_offset ra, -8
; RV64I-NEXT: lui a0, 2
-; RV64I-NEXT: addiw a0, a0, 128
+; RV64I-NEXT: addi a0, a0, 128
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 10352
; RV64I-NEXT: addi a0, sp, 8
; RV64I-NEXT: call callee
; RV64I-NEXT: lui a0, 2
-; RV64I-NEXT: addiw a0, a0, 128
+; RV64I-NEXT: addi a0, a0, 128
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 2032
; RV64I-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
@@ -482,13 +482,13 @@ define void @frame_16kb_minus_80() {
; RV64I-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; RV64I-NEXT: .cfi_offset ra, -8
; RV64I-NEXT: lui a0, 4
-; RV64I-NEXT: addiw a0, a0, -80
+; RV64I-NEXT: addi a0, a0, -80
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 18336
; RV64I-NEXT: addi a0, sp, 8
; RV64I-NEXT: call callee
; RV64I-NEXT: lui a0, 4
-; RV64I-NEXT: addiw a0, a0, -80
+; RV64I-NEXT: addi a0, a0, -80
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 2032
; RV64I-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rem.ll b/llvm/test/CodeGen/RISCV/rem.ll
index 612aaafc53bc8..12697e86a91d9 100644
--- a/llvm/test/CodeGen/RISCV/rem.ll
+++ b/llvm/test/CodeGen/RISCV/rem.ll
@@ -567,7 +567,7 @@ define i16 @urem16(i16 %a, i16 %b) nounwind {
; RV64I-NEXT: addi sp, sp, -16
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: call __umoddi3
diff --git a/llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll b/llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll
index 99b111b10f66f..8a05c867bb07b 100644
--- a/llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll
+++ b/llvm/test/CodeGen/RISCV/rv32i-rv64i-half.ll
@@ -51,7 +51,7 @@ define half @half_test(half %a, half %b) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw s2, a1, -1
+; RV64I-NEXT: addi s2, a1, -1
; RV64I-NEXT: and a0, a0, s2
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s1, a0
diff --git a/llvm/test/CodeGen/RISCV/rv64-float-convert.ll b/llvm/test/CodeGen/RISCV/rv64-float-convert.ll
index e387586cd46c2..6aae8984546ed 100644
--- a/llvm/test/CodeGen/RISCV/rv64-float-convert.ll
+++ b/llvm/test/CodeGen/RISCV/rv64-float-convert.ll
@@ -83,7 +83,7 @@ define i128 @fptosi_sat_f32_to_i128(float %a) nounwind {
; RV64I-NEXT: slli s3, s5, 63
; RV64I-NEXT: .LBB4_2:
; RV64I-NEXT: lui a1, 520192
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: mv s4, a0
@@ -168,7 +168,7 @@ define i128 @fptosi_sat_f32_to_i128(float %a) nounwind {
; RV64IZFINX-NEXT: slli a1, a2, 63
; RV64IZFINX-NEXT: .LBB4_2:
; RV64IZFINX-NEXT: lui a3, 520192
-; RV64IZFINX-NEXT: addiw a3, a3, -1
+; RV64IZFINX-NEXT: addi a3, a3, -1
; RV64IZFINX-NEXT: flt.s a3, a3, s0
; RV64IZFINX-NEXT: beqz a3, .LBB4_4
; RV64IZFINX-NEXT: # %bb.3:
@@ -202,7 +202,7 @@ define i128 @fptoui_sat_f32_to_i128(float %a) nounwind {
; RV64I-NEXT: sd s2, 0(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a1, 522240
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: sgtz a0, a0
; RV64I-NEXT: neg s1, a0
@@ -263,7 +263,7 @@ define i128 @fptoui_sat_f32_to_i128(float %a) nounwind {
; RV64IZFINX-NEXT: and a0, s1, a0
; RV64IZFINX-NEXT: lui a2, 522240
; RV64IZFINX-NEXT: and a1, s1, a1
-; RV64IZFINX-NEXT: addiw a2, a2, -1
+; RV64IZFINX-NEXT: addi a2, a2, -1
; RV64IZFINX-NEXT: flt.s a2, a2, s0
; RV64IZFINX-NEXT: neg a2, a2
; RV64IZFINX-NEXT: or a0, a2, a0
diff --git a/llvm/test/CodeGen/RISCV/rv64-half-convert.ll b/llvm/test/CodeGen/RISCV/rv64-half-convert.ll
index ea582ac258b71..57061e1bde83a 100644
--- a/llvm/test/CodeGen/RISCV/rv64-half-convert.ll
+++ b/llvm/test/CodeGen/RISCV/rv64-half-convert.ll
@@ -160,7 +160,7 @@ define i128 @fptosi_sat_f16_to_i128(half %a) nounwind {
; RV64I-NEXT: slli s3, s5, 63
; RV64I-NEXT: .LBB4_2:
; RV64I-NEXT: lui a1, 520192
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: mv a0, s2
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: mv s4, a0
@@ -246,7 +246,7 @@ define i128 @fptosi_sat_f16_to_i128(half %a) nounwind {
; RV64IZHINX-NEXT: slli a1, a2, 63
; RV64IZHINX-NEXT: .LBB4_2:
; RV64IZHINX-NEXT: lui a3, 520192
-; RV64IZHINX-NEXT: addiw a3, a3, -1
+; RV64IZHINX-NEXT: addi a3, a3, -1
; RV64IZHINX-NEXT: flt.s a3, a3, s0
; RV64IZHINX-NEXT: beqz a3, .LBB4_4
; RV64IZHINX-NEXT: # %bb.3:
@@ -281,7 +281,7 @@ define i128 @fptoui_sat_f16_to_i128(half %a) nounwind {
; RV64I-NEXT: call __extendhfsf2
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a1, 522240
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: call __gtsf2
; RV64I-NEXT: sgtz a0, a0
; RV64I-NEXT: neg s1, a0
@@ -336,7 +336,7 @@ define i128 @fptoui_sat_f16_to_i128(half %a) nounwind {
; RV64IZHINX-NEXT: sd s1, 8(sp) # 8-byte Folded Spill
; RV64IZHINX-NEXT: fcvt.s.h a0, a0
; RV64IZHINX-NEXT: lui a1, 522240
-; RV64IZHINX-NEXT: addiw a1, a1, -1
+; RV64IZHINX-NEXT: addi a1, a1, -1
; RV64IZHINX-NEXT: fle.s a2, zero, a0
; RV64IZHINX-NEXT: flt.s a1, a1, a0
; RV64IZHINX-NEXT: neg s0, a1
diff --git a/llvm/test/CodeGen/RISCV/rv64-patchpoint.ll b/llvm/test/CodeGen/RISCV/rv64-patchpoint.ll
index beccd39e819d5..0850cc65d81ee 100644
--- a/llvm/test/CodeGen/RISCV/rv64-patchpoint.ll
+++ b/llvm/test/CodeGen/RISCV/rv64-patchpoint.ll
@@ -15,7 +15,7 @@ define i64 @trivial_patchpoint_codegen(i64 %p1, i64 %p2, i64 %p3, i64 %p4) {
; CHECK-NEXT: mv s0, a0
; CHECK-NEXT: .Ltmp0:
; CHECK-NEXT: lui ra, 3563
-; CHECK-NEXT: addiw ra, ra, -577
+; CHECK-NEXT: addi ra, ra, -577
; CHECK-NEXT: slli ra, ra, 12
; CHECK-NEXT: addi ra, ra, -259
; CHECK-NEXT: slli ra, ra, 12
@@ -26,7 +26,7 @@ define i64 @trivial_patchpoint_codegen(i64 %p1, i64 %p2, i64 %p3, i64 %p4) {
; CHECK-NEXT: mv a1, s1
; CHECK-NEXT: .Ltmp1:
; CHECK-NEXT: lui ra, 3563
-; CHECK-NEXT: addiw ra, ra, -577
+; CHECK-NEXT: addi ra, ra, -577
; CHECK-NEXT: slli ra, ra, 12
; CHECK-NEXT: addi ra, ra, -259
; CHECK-NEXT: slli ra, ra, 12
diff --git a/llvm/test/CodeGen/RISCV/rv64xtheadba.ll b/llvm/test/CodeGen/RISCV/rv64xtheadba.ll
index 05396e3355ff6..d20fb66dbbeea 100644
--- a/llvm/test/CodeGen/RISCV/rv64xtheadba.ll
+++ b/llvm/test/CodeGen/RISCV/rv64xtheadba.ll
@@ -507,7 +507,7 @@ define i64 @addmul4230(i64 %a, i64 %b) {
; CHECK-LABEL: addmul4230:
; CHECK: # %bb.0:
; CHECK-NEXT: lui a2, 1
-; CHECK-NEXT: addiw a2, a2, 134
+; CHECK-NEXT: addi a2, a2, 134
; CHECK-NEXT: mul a0, a0, a2
; CHECK-NEXT: add a0, a0, a1
; CHECK-NEXT: ret
@@ -1034,7 +1034,7 @@ define i64 @add4104(i64 %a) {
; RV64I-LABEL: add4104:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 8
+; RV64I-NEXT: addi a1, a1, 8
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1051,7 +1051,7 @@ define i64 @add4104_2(i64 %a) {
; RV64I-LABEL: add4104_2:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 8
+; RV64I-NEXT: addi a1, a1, 8
; RV64I-NEXT: or a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1068,7 +1068,7 @@ define i64 @add8208(i64 %a) {
; RV64I-LABEL: add8208:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 2
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/rv64xtheadbb.ll b/llvm/test/CodeGen/RISCV/rv64xtheadbb.ll
index 10ef3357d4783..00f7b462f68db 100644
--- a/llvm/test/CodeGen/RISCV/rv64xtheadbb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64xtheadbb.ll
@@ -14,7 +14,7 @@ define signext i32 @ctlz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -27,7 +27,7 @@ define signext i32 @ctlz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -66,7 +66,7 @@ define signext i32 @log2_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -79,7 +79,7 @@ define signext i32 @log2_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -127,7 +127,7 @@ define signext i32 @log2_ceil_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a2, a1, 1
; RV64I-NEXT: lui a3, 349525
; RV64I-NEXT: or a1, a1, a2
-; RV64I-NEXT: addiw a2, a3, 1365
+; RV64I-NEXT: addi a2, a3, 1365
; RV64I-NEXT: srliw a3, a1, 2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srliw a3, a1, 4
@@ -140,7 +140,7 @@ define signext i32 @log2_ceil_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a3, a1, 1
; RV64I-NEXT: and a2, a3, a2
; RV64I-NEXT: lui a3, 209715
-; RV64I-NEXT: addiw a3, a3, 819
+; RV64I-NEXT: addi a3, a3, 819
; RV64I-NEXT: sub a1, a1, a2
; RV64I-NEXT: and a2, a1, a3
; RV64I-NEXT: srli a1, a1, 2
@@ -181,7 +181,7 @@ define signext i32 @findLastSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a1, a0, a1
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: srliw a3, a1, 2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srliw a3, a1, 4
@@ -194,7 +194,7 @@ define signext i32 @findLastSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a3, a1, 1
; RV64I-NEXT: and a2, a3, a2
; RV64I-NEXT: lui a3, 209715
-; RV64I-NEXT: addiw a3, a3, 819
+; RV64I-NEXT: addi a3, a3, 819
; RV64I-NEXT: sub a1, a1, a2
; RV64I-NEXT: and a2, a1, a3
; RV64I-NEXT: srli a1, a1, 2
@@ -242,7 +242,7 @@ define i32 @ctlz_lshr_i32(i32 signext %a) {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -255,7 +255,7 @@ define i32 @ctlz_lshr_i32(i32 signext %a) {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -299,8 +299,8 @@ define i64 @ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: lui a3, 209715
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
-; RV64I-NEXT: addiw a2, a3, 819
+; RV64I-NEXT: addi a1, a2, 1365
+; RV64I-NEXT: addi a2, a3, 819
; RV64I-NEXT: srli a3, a0, 2
; RV64I-NEXT: or a0, a0, a3
; RV64I-NEXT: slli a3, a1, 32
@@ -319,7 +319,7 @@ define i64 @ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -849,7 +849,7 @@ define signext i32 @bswap_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a1, a0, 8
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: srliw a3, a0, 24
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a2, a0, a2
; RV64I-NEXT: or a1, a1, a3
@@ -905,7 +905,7 @@ define i64 @bswap_i64(i64 %a) {
; RV64I-NEXT: srli a3, a0, 56
; RV64I-NEXT: srli a4, a0, 24
; RV64I-NEXT: lui a5, 4080
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srli a3, a0, 8
diff --git a/llvm/test/CodeGen/RISCV/rv64zba.ll b/llvm/test/CodeGen/RISCV/rv64zba.ll
index 6bd808ca9e126..8982b60bf24bd 100644
--- a/llvm/test/CodeGen/RISCV/rv64zba.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zba.ll
@@ -1049,7 +1049,7 @@ define i64 @addmul4230(i64 %a, i64 %b) {
; CHECK-LABEL: addmul4230:
; CHECK: # %bb.0:
; CHECK-NEXT: lui a2, 1
-; CHECK-NEXT: addiw a2, a2, 134
+; CHECK-NEXT: addi a2, a2, 134
; CHECK-NEXT: mul a0, a0, a2
; CHECK-NEXT: add a0, a0, a1
; CHECK-NEXT: ret
@@ -2061,7 +2061,7 @@ define i64 @add4104(i64 %a) {
; RV64I-LABEL: add4104:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 8
+; RV64I-NEXT: addi a1, a1, 8
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
;
@@ -2084,7 +2084,7 @@ define i64 @add4104_2(i64 %a) {
; RV64I-LABEL: add4104_2:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 8
+; RV64I-NEXT: addi a1, a1, 8
; RV64I-NEXT: or a0, a0, a1
; RV64I-NEXT: ret
;
@@ -2107,7 +2107,7 @@ define i64 @add8208(i64 %a) {
; RV64I-LABEL: add8208:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 2
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
;
@@ -4101,7 +4101,7 @@ define i64 @srli_slli_i16(i64 %1) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: slli a0, a0, 2
; CHECK-NEXT: lui a1, 256
-; CHECK-NEXT: addiw a1, a1, -16
+; CHECK-NEXT: addi a1, a1, -16
; CHECK-NEXT: and a0, a0, a1
; CHECK-NEXT: ret
entry:
diff --git a/llvm/test/CodeGen/RISCV/rv64zbb-intrinsic.ll b/llvm/test/CodeGen/RISCV/rv64zbb-intrinsic.ll
index 3f984deccfb2c..4ec5ec6a8e196 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbb-intrinsic.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbb-intrinsic.ll
@@ -61,7 +61,7 @@ define i64 @orcb64_knownbits(i64 %a) nounwind {
; RV64ZBB-NEXT: lui a1, 65535
; RV64ZBB-NEXT: lui a2, 256
; RV64ZBB-NEXT: slli a1, a1, 12
-; RV64ZBB-NEXT: addiw a2, a2, 8
+; RV64ZBB-NEXT: addi a2, a2, 8
; RV64ZBB-NEXT: and a0, a0, a1
; RV64ZBB-NEXT: slli a1, a2, 42
; RV64ZBB-NEXT: add a1, a2, a1
diff --git a/llvm/test/CodeGen/RISCV/rv64zbb.ll b/llvm/test/CodeGen/RISCV/rv64zbb.ll
index 97b8b2aa5e525..e6407279870db 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbb.ll
@@ -14,7 +14,7 @@ define signext i32 @ctlz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -27,7 +27,7 @@ define signext i32 @ctlz_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -64,7 +64,7 @@ define signext i32 @log2_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -77,7 +77,7 @@ define signext i32 @log2_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -123,7 +123,7 @@ define signext i32 @log2_ceil_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a2, a1, 1
; RV64I-NEXT: lui a3, 349525
; RV64I-NEXT: or a1, a1, a2
-; RV64I-NEXT: addiw a2, a3, 1365
+; RV64I-NEXT: addi a2, a3, 1365
; RV64I-NEXT: srliw a3, a1, 2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srliw a3, a1, 4
@@ -136,7 +136,7 @@ define signext i32 @log2_ceil_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a3, a1, 1
; RV64I-NEXT: and a2, a3, a2
; RV64I-NEXT: lui a3, 209715
-; RV64I-NEXT: addiw a3, a3, 819
+; RV64I-NEXT: addi a3, a3, 819
; RV64I-NEXT: sub a1, a1, a2
; RV64I-NEXT: and a2, a1, a3
; RV64I-NEXT: srli a1, a1, 2
@@ -175,7 +175,7 @@ define signext i32 @findLastSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a1, a0, a1
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: srliw a3, a1, 2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srliw a3, a1, 4
@@ -188,7 +188,7 @@ define signext i32 @findLastSet_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a3, a1, 1
; RV64I-NEXT: and a2, a3, a2
; RV64I-NEXT: lui a3, 209715
-; RV64I-NEXT: addiw a3, a3, 819
+; RV64I-NEXT: addi a3, a3, 819
; RV64I-NEXT: sub a1, a1, a2
; RV64I-NEXT: and a2, a1, a3
; RV64I-NEXT: srli a1, a1, 2
@@ -234,7 +234,7 @@ define i32 @ctlz_lshr_i32(i32 signext %a) {
; RV64I-NEXT: srliw a1, a0, 1
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srliw a2, a0, 2
; RV64I-NEXT: or a0, a0, a2
; RV64I-NEXT: srliw a2, a0, 4
@@ -247,7 +247,7 @@ define i32 @ctlz_lshr_i32(i32 signext %a) {
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -289,8 +289,8 @@ define i64 @ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: lui a2, 349525
; RV64I-NEXT: lui a3, 209715
; RV64I-NEXT: or a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
-; RV64I-NEXT: addiw a2, a3, 819
+; RV64I-NEXT: addi a1, a2, 1365
+; RV64I-NEXT: addi a2, a3, 819
; RV64I-NEXT: srli a3, a0, 2
; RV64I-NEXT: or a0, a0, a3
; RV64I-NEXT: slli a3, a1, 32
@@ -309,7 +309,7 @@ define i64 @ctlz_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -567,10 +567,10 @@ define signext i32 @ctpop_i32(i32 signext %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: srli a1, a0, 1
; RV64I-NEXT: lui a2, 349525
-; RV64I-NEXT: addiw a2, a2, 1365
+; RV64I-NEXT: addi a2, a2, 1365
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -677,11 +677,11 @@ define signext i32 @ctpop_i32_load(ptr %p) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lw a0, 0(a0)
; RV64I-NEXT: lui a1, 349525
-; RV64I-NEXT: addiw a1, a1, 1365
+; RV64I-NEXT: addi a1, a1, 1365
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -718,11 +718,11 @@ define <2 x i32> @ctpop_v2i32(<2 x i32> %a) nounwind {
; RV64I-NEXT: lui a3, 349525
; RV64I-NEXT: lui a4, 209715
; RV64I-NEXT: srli a5, a1, 1
-; RV64I-NEXT: addiw a3, a3, 1365
+; RV64I-NEXT: addi a3, a3, 1365
; RV64I-NEXT: and a2, a2, a3
; RV64I-NEXT: and a3, a5, a3
; RV64I-NEXT: lui a5, 61681
-; RV64I-NEXT: addiw a4, a4, 819
+; RV64I-NEXT: addi a4, a4, 819
; RV64I-NEXT: addi a5, a5, -241
; RV64I-NEXT: sub a0, a0, a2
; RV64I-NEXT: sub a1, a1, a3
@@ -876,8 +876,8 @@ define i64 @ctpop_i64(i64 %a) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 349525
; RV64I-NEXT: lui a2, 209715
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: slli a3, a1, 32
; RV64I-NEXT: add a1, a1, a3
; RV64I-NEXT: slli a3, a2, 32
@@ -885,7 +885,7 @@ define i64 @ctpop_i64(i64 %a) nounwind {
; RV64I-NEXT: srli a3, a0, 1
; RV64I-NEXT: and a1, a3, a1
; RV64I-NEXT: lui a3, 61681
-; RV64I-NEXT: addiw a3, a3, -241
+; RV64I-NEXT: addi a3, a3, -241
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
@@ -998,9 +998,9 @@ define <2 x i64> @ctpop_v2i64(<2 x i64> %a) nounwind {
; RV64I-NEXT: lui a3, 349525
; RV64I-NEXT: lui a4, 209715
; RV64I-NEXT: lui a5, 61681
-; RV64I-NEXT: addiw a3, a3, 1365
-; RV64I-NEXT: addiw a4, a4, 819
-; RV64I-NEXT: addiw a5, a5, -241
+; RV64I-NEXT: addi a3, a3, 1365
+; RV64I-NEXT: addi a4, a4, 819
+; RV64I-NEXT: addi a5, a5, -241
; RV64I-NEXT: slli a6, a3, 32
; RV64I-NEXT: add a3, a3, a6
; RV64I-NEXT: slli a6, a4, 32
@@ -1453,7 +1453,7 @@ define signext i32 @bswap_i32(i32 signext %a) nounwind {
; RV64I-NEXT: srli a1, a0, 8
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: srliw a3, a0, 24
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a2, a0, a2
; RV64I-NEXT: or a1, a1, a3
@@ -1511,7 +1511,7 @@ define i64 @bswap_i64(i64 %a) {
; RV64I-NEXT: srli a3, a0, 56
; RV64I-NEXT: srli a4, a0, 24
; RV64I-NEXT: lui a5, 4080
-; RV64I-NEXT: addiw a2, a2, -256
+; RV64I-NEXT: addi a2, a2, -256
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: or a1, a1, a3
; RV64I-NEXT: srli a3, a0, 8
@@ -1572,7 +1572,7 @@ define i32 @orc_b_i32(i32 %a) {
; RV64ZBB-LABEL: orc_b_i32:
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: lui a1, 4112
-; RV64ZBB-NEXT: addiw a1, a1, 257
+; RV64ZBB-NEXT: addi a1, a1, 257
; RV64ZBB-NEXT: and a0, a0, a1
; RV64ZBB-NEXT: orc.b a0, a0
; RV64ZBB-NEXT: ret
@@ -1585,7 +1585,7 @@ define i64 @orc_b_i64(i64 %a) {
; RV64I-LABEL: orc_b_i64:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 4112
-; RV64I-NEXT: addiw a1, a1, 257
+; RV64I-NEXT: addi a1, a1, 257
; RV64I-NEXT: slli a2, a1, 32
; RV64I-NEXT: add a1, a1, a2
; RV64I-NEXT: and a0, a0, a1
@@ -1596,7 +1596,7 @@ define i64 @orc_b_i64(i64 %a) {
; RV64ZBB-LABEL: orc_b_i64:
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: lui a1, 4112
-; RV64ZBB-NEXT: addiw a1, a1, 257
+; RV64ZBB-NEXT: addi a1, a1, 257
; RV64ZBB-NEXT: slli a2, a1, 32
; RV64ZBB-NEXT: add a1, a1, a2
; RV64ZBB-NEXT: and a0, a0, a1
@@ -1755,7 +1755,7 @@ define i16 @sub_if_uge_i16(i16 %x, i16 %y) {
; RV64I-LABEL: sub_if_uge_i16:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a3, a1, a2
; RV64I-NEXT: and a2, a0, a2
; RV64I-NEXT: sltu a2, a2, a3
@@ -1978,7 +1978,7 @@ define i32 @sub_if_uge_C_i32(i32 signext %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 16
; RV64I-NEXT: lui a2, 1048560
-; RV64I-NEXT: addiw a1, a1, -16
+; RV64I-NEXT: addi a1, a1, -16
; RV64I-NEXT: sltu a1, a1, a0
; RV64I-NEXT: negw a1, a1
; RV64I-NEXT: addi a2, a2, 15
@@ -2004,8 +2004,8 @@ define i64 @sub_if_uge_C_i64(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 298
; RV64I-NEXT: lui a2, 1046192
-; RV64I-NEXT: addiw a1, a1, 95
-; RV64I-NEXT: addiw a2, a2, -761
+; RV64I-NEXT: addi a1, a1, 95
+; RV64I-NEXT: addi a2, a2, -761
; RV64I-NEXT: slli a1, a1, 12
; RV64I-NEXT: addi a1, a1, 511
; RV64I-NEXT: sltu a1, a1, a0
@@ -2018,7 +2018,7 @@ define i64 @sub_if_uge_C_i64(i64 %x) {
; RV64ZBB-LABEL: sub_if_uge_C_i64:
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: lui a1, 1046192
-; RV64ZBB-NEXT: addiw a1, a1, -761
+; RV64ZBB-NEXT: addi a1, a1, -761
; RV64ZBB-NEXT: slli a1, a1, 9
; RV64ZBB-NEXT: add a1, a0, a1
; RV64ZBB-NEXT: minu a0, a1, a0
@@ -2034,7 +2034,7 @@ define i32 @sub_if_uge_C_multiuse_cmp_i32(i32 signext %x, ptr %z) {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: lui a3, 1048560
-; RV64I-NEXT: addiw a2, a2, -16
+; RV64I-NEXT: addi a2, a2, -16
; RV64I-NEXT: sltu a2, a2, a0
; RV64I-NEXT: negw a4, a2
; RV64I-NEXT: addi a3, a3, 15
@@ -2047,7 +2047,7 @@ define i32 @sub_if_uge_C_multiuse_cmp_i32(i32 signext %x, ptr %z) {
; RV64ZBB: # %bb.0:
; RV64ZBB-NEXT: lui a2, 16
; RV64ZBB-NEXT: lui a3, 1048560
-; RV64ZBB-NEXT: addiw a2, a2, -16
+; RV64ZBB-NEXT: addi a2, a2, -16
; RV64ZBB-NEXT: addi a3, a3, 15
; RV64ZBB-NEXT: sltu a2, a2, a0
; RV64ZBB-NEXT: addw a3, a0, a3
@@ -2069,7 +2069,7 @@ define i32 @sub_if_uge_C_multiuse_sub_i32(i32 signext %x, ptr %z) {
; RV64I-NEXT: lui a3, 16
; RV64I-NEXT: addi a2, a2, 15
; RV64I-NEXT: addw a2, a0, a2
-; RV64I-NEXT: addiw a3, a3, -16
+; RV64I-NEXT: addi a3, a3, -16
; RV64I-NEXT: sw a2, 0(a1)
; RV64I-NEXT: bltu a3, a0, .LBB75_2
; RV64I-NEXT: # %bb.1:
@@ -2098,7 +2098,7 @@ define i32 @sub_if_uge_C_swapped_i32(i32 signext %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 16
; RV64I-NEXT: lui a2, 1048560
-; RV64I-NEXT: addiw a1, a1, -15
+; RV64I-NEXT: addi a1, a1, -15
; RV64I-NEXT: sltu a1, a0, a1
; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: addi a2, a2, 15
diff --git a/llvm/test/CodeGen/RISCV/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
index 734bffee96f09..696c2a5e0f806 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbkb.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbkb.ll
@@ -319,7 +319,7 @@ define i64 @pack_i64_imm() {
; RV64I-LABEL: pack_i64_imm:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a0, 65793
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: slli a1, a0, 32
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rv64zbs.ll b/llvm/test/CodeGen/RISCV/rv64zbs.ll
index c1e1e16d0d4ae..a8b06b2af5764 100644
--- a/llvm/test/CodeGen/RISCV/rv64zbs.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zbs.ll
@@ -513,7 +513,7 @@ define signext i32 @bclri_i32_11(i32 signext %a) nounwind {
; RV64I-LABEL: bclri_i32_11:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -529,7 +529,7 @@ define signext i32 @bclri_i32_30(i32 signext %a) nounwind {
; RV64I-LABEL: bclri_i32_30:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 786432
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -564,7 +564,7 @@ define i64 @bclri_i64_11(i64 %a) nounwind {
; RV64I-LABEL: bclri_i64_11:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1048575
-; RV64I-NEXT: addiw a1, a1, 2047
+; RV64I-NEXT: addi a1, a1, 2047
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -580,7 +580,7 @@ define i64 @bclri_i64_30(i64 %a) nounwind {
; RV64I-LABEL: bclri_i64_30:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 786432
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -644,7 +644,7 @@ define i64 @bclri_i64_large0(i64 %a) nounwind {
; RV64I-LABEL: bclri_i64_large0:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1044480
-; RV64I-NEXT: addiw a1, a1, -256
+; RV64I-NEXT: addi a1, a1, -256
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -661,7 +661,7 @@ define i64 @bclri_i64_large1(i64 %a) nounwind {
; RV64I-LABEL: bclri_i64_large1:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1044464
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: ret
;
@@ -972,7 +972,7 @@ define i64 @xor_i64_4099(i64 %a) nounwind {
; RV64I-LABEL: xor_i64_4099:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 3
+; RV64I-NEXT: addi a1, a1, 3
; RV64I-NEXT: xor a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1016,7 +1016,7 @@ define i64 @xor_i64_66901(i64 %a) nounwind {
; RV64I-LABEL: xor_i64_66901:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 1365
+; RV64I-NEXT: addi a1, a1, 1365
; RV64I-NEXT: xor a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1033,7 +1033,7 @@ define i64 @or_i64_4099(i64 %a) nounwind {
; RV64I-LABEL: or_i64_4099:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 3
+; RV64I-NEXT: addi a1, a1, 3
; RV64I-NEXT: or a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1059,7 +1059,7 @@ define i64 @or_i64_66901(i64 %a) nounwind {
; RV64I-LABEL: or_i64_66901:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 1365
+; RV64I-NEXT: addi a1, a1, 1365
; RV64I-NEXT: or a0, a0, a1
; RV64I-NEXT: ret
;
@@ -1203,7 +1203,7 @@ define i1 @icmp_eq_nonpow2(i32 signext %x) nounwind {
; CHECK-LABEL: icmp_eq_nonpow2:
; CHECK: # %bb.0:
; CHECK-NEXT: lui a1, 8
-; CHECK-NEXT: addiw a1, a1, -1
+; CHECK-NEXT: addi a1, a1, -1
; CHECK-NEXT: xor a0, a0, a1
; CHECK-NEXT: seqz a0, a0
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
index 4d955d46f2fd1..a75c159339bed 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
@@ -782,7 +782,7 @@ define <vscale x 1 x i64> @bitreverse_nxv1i64(<vscale x 1 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v10, v8, a1
; RV64-NEXT: vsrl.vx v11, v8, a0
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v11, v11, a2
; RV64-NEXT: vor.vv v10, v11, v10
; RV64-NEXT: vsrl.vi v11, v8, 8
@@ -801,9 +801,9 @@ define <vscale x 1 x i64> @bitreverse_nxv1i64(<vscale x 1 x i64> %va) {
; RV64-NEXT: vor.vv v10, v11, v10
; RV64-NEXT: vsll.vx v11, v8, a1
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 819
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 819
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: vand.vx v8, v8, a2
; RV64-NEXT: slli a2, a3, 32
; RV64-NEXT: vsll.vx v8, v8, a0
@@ -923,7 +923,7 @@ define <vscale x 2 x i64> @bitreverse_nxv2i64(<vscale x 2 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v12, v8, a1
; RV64-NEXT: vsrl.vx v14, v8, a0
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v14, v14, a2
; RV64-NEXT: vor.vv v12, v14, v12
; RV64-NEXT: vsrl.vi v14, v8, 8
@@ -942,9 +942,9 @@ define <vscale x 2 x i64> @bitreverse_nxv2i64(<vscale x 2 x i64> %va) {
; RV64-NEXT: vor.vv v12, v14, v12
; RV64-NEXT: vsll.vx v14, v8, a1
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 819
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 819
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: vand.vx v8, v8, a2
; RV64-NEXT: slli a2, a3, 32
; RV64-NEXT: vsll.vx v8, v8, a0
@@ -1064,7 +1064,7 @@ define <vscale x 4 x i64> @bitreverse_nxv4i64(<vscale x 4 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v12, v8, a1
; RV64-NEXT: vsrl.vx v20, v8, a0
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v20, v20, a2
; RV64-NEXT: vor.vv v12, v20, v12
; RV64-NEXT: vsrl.vi v20, v8, 8
@@ -1083,9 +1083,9 @@ define <vscale x 4 x i64> @bitreverse_nxv4i64(<vscale x 4 x i64> %va) {
; RV64-NEXT: vor.vv v16, v16, v20
; RV64-NEXT: vsll.vx v20, v8, a1
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 819
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 819
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: vand.vx v8, v8, a2
; RV64-NEXT: slli a2, a3, 32
; RV64-NEXT: vsll.vx v8, v8, a0
@@ -1227,7 +1227,7 @@ define <vscale x 8 x i64> @bitreverse_nxv8i64(<vscale x 8 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v16, v8, a1
; RV64-NEXT: vsrl.vx v0, v8, a0
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v0, v0, a2
; RV64-NEXT: vor.vv v16, v0, v16
; RV64-NEXT: vsrl.vi v0, v8, 8
@@ -1246,9 +1246,9 @@ define <vscale x 8 x i64> @bitreverse_nxv8i64(<vscale x 8 x i64> %va) {
; RV64-NEXT: vor.vv v24, v24, v0
; RV64-NEXT: vsll.vx v0, v8, a1
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 819
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 819
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: vand.vx v8, v8, a2
; RV64-NEXT: slli a2, a3, 32
; RV64-NEXT: vsll.vx v8, v8, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
index a68efe9929217..f704a8ca875ba 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
@@ -1512,9 +1512,9 @@ define <vscale x 1 x i64> @vp_bitreverse_nxv1i64(<vscale x 1 x i64> %va, <vscale
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -1525,7 +1525,7 @@ define <vscale x 1 x i64> @vp_bitreverse_nxv1i64(<vscale x 1 x i64> %va, <vscale
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vand.vx v9, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v9, v9, 24, v0.t
; RV64-NEXT: vand.vx v10, v8, a3, v0.t
; RV64-NEXT: vsll.vi v10, v10, 8, v0.t
@@ -1653,7 +1653,7 @@ define <vscale x 1 x i64> @vp_bitreverse_nxv1i64_unmasked(<vscale x 1 x i64> %va
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vsrl.vi v9, v8, 24
; RV64-NEXT: vsrl.vi v10, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v11, v8, a3
; RV64-NEXT: vsrl.vx v12, v8, a5
; RV64-NEXT: vand.vx v12, v12, a0
@@ -1674,9 +1674,9 @@ define <vscale x 1 x i64> @vp_bitreverse_nxv1i64_unmasked(<vscale x 1 x i64> %va
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -1794,9 +1794,9 @@ define <vscale x 2 x i64> @vp_bitreverse_nxv2i64(<vscale x 2 x i64> %va, <vscale
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -1807,7 +1807,7 @@ define <vscale x 2 x i64> @vp_bitreverse_nxv2i64(<vscale x 2 x i64> %va, <vscale
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vand.vx v10, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v10, v10, 24, v0.t
; RV64-NEXT: vand.vx v12, v8, a3, v0.t
; RV64-NEXT: vsll.vi v12, v12, 8, v0.t
@@ -1935,7 +1935,7 @@ define <vscale x 2 x i64> @vp_bitreverse_nxv2i64_unmasked(<vscale x 2 x i64> %va
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vsrl.vi v12, v8, 24
; RV64-NEXT: vsrl.vi v14, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v10, v8, a3
; RV64-NEXT: vsrl.vx v16, v8, a5
; RV64-NEXT: vand.vx v16, v16, a0
@@ -1956,9 +1956,9 @@ define <vscale x 2 x i64> @vp_bitreverse_nxv2i64_unmasked(<vscale x 2 x i64> %va
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2076,9 +2076,9 @@ define <vscale x 4 x i64> @vp_bitreverse_nxv4i64(<vscale x 4 x i64> %va, <vscale
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -2089,7 +2089,7 @@ define <vscale x 4 x i64> @vp_bitreverse_nxv4i64(<vscale x 4 x i64> %va, <vscale
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vand.vx v12, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v12, v12, 24, v0.t
; RV64-NEXT: vand.vx v16, v8, a3, v0.t
; RV64-NEXT: vsll.vi v16, v16, 8, v0.t
@@ -2217,7 +2217,7 @@ define <vscale x 4 x i64> @vp_bitreverse_nxv4i64_unmasked(<vscale x 4 x i64> %va
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vsrl.vi v16, v8, 24
; RV64-NEXT: vsrl.vi v20, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v12, v8, a3
; RV64-NEXT: vsrl.vx v24, v8, a5
; RV64-NEXT: vand.vx v24, v24, a0
@@ -2238,9 +2238,9 @@ define <vscale x 4 x i64> @vp_bitreverse_nxv4i64_unmasked(<vscale x 4 x i64> %va
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2414,7 +2414,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -2441,9 +2441,9 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2593,7 +2593,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64_unmasked(<vscale x 7 x i64> %va
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -2617,9 +2617,9 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64_unmasked(<vscale x 7 x i64> %va
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2801,7 +2801,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -2828,9 +2828,9 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2980,7 +2980,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64_unmasked(<vscale x 8 x i64> %va
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -3004,9 +3004,9 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64_unmasked(<vscale x 8 x i64> %va
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
index 5bd15df031dac..b8521c37e4906 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
@@ -304,7 +304,7 @@ define <vscale x 1 x i64> @bswap_nxv1i64(<vscale x 1 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v10, v8, a0
; RV64-NEXT: vsrl.vx v11, v8, a1
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v11, v11, a2
; RV64-NEXT: vor.vv v10, v11, v10
; RV64-NEXT: vsrl.vi v11, v8, 8
@@ -388,7 +388,7 @@ define <vscale x 2 x i64> @bswap_nxv2i64(<vscale x 2 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v12, v8, a0
; RV64-NEXT: vsrl.vx v14, v8, a1
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v14, v14, a2
; RV64-NEXT: vor.vv v12, v14, v12
; RV64-NEXT: vsrl.vi v14, v8, 8
@@ -472,7 +472,7 @@ define <vscale x 4 x i64> @bswap_nxv4i64(<vscale x 4 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v16, v8, a0
; RV64-NEXT: vsrl.vx v20, v8, a1
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v20, v20, a2
; RV64-NEXT: vor.vv v16, v20, v16
; RV64-NEXT: vsrl.vi v20, v8, 8
@@ -578,7 +578,7 @@ define <vscale x 8 x i64> @bswap_nxv8i64(<vscale x 8 x i64> %va) {
; RV64-NEXT: lui a3, 4080
; RV64-NEXT: vsrl.vx v16, v8, a0
; RV64-NEXT: vsrl.vx v0, v8, a1
-; RV64-NEXT: addiw a2, a2, -256
+; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vand.vx v0, v0, a2
; RV64-NEXT: vor.vv v16, v0, v16
; RV64-NEXT: vsrl.vi v0, v8, 8
diff --git a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
index ca637432cc0cd..3d31cf80cdd3a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
@@ -559,7 +559,7 @@ define <vscale x 1 x i64> @vp_bswap_nxv1i64(<vscale x 1 x i64> %va, <vscale x 1
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vand.vx v9, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v9, v9, 24, v0.t
; RV64-NEXT: vand.vx v10, v8, a2, v0.t
; RV64-NEXT: vsll.vi v10, v10, 8, v0.t
@@ -642,7 +642,7 @@ define <vscale x 1 x i64> @vp_bswap_nxv1i64_unmasked(<vscale x 1 x i64> %va, i32
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vsrl.vi v9, v8, 24
; RV64-NEXT: vsrl.vi v10, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v11, v8, a3
; RV64-NEXT: vsrl.vx v12, v8, a5
; RV64-NEXT: vand.vx v12, v12, a0
@@ -727,7 +727,7 @@ define <vscale x 2 x i64> @vp_bswap_nxv2i64(<vscale x 2 x i64> %va, <vscale x 2
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vand.vx v10, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v10, v10, 24, v0.t
; RV64-NEXT: vand.vx v12, v8, a2, v0.t
; RV64-NEXT: vsll.vi v12, v12, 8, v0.t
@@ -810,7 +810,7 @@ define <vscale x 2 x i64> @vp_bswap_nxv2i64_unmasked(<vscale x 2 x i64> %va, i32
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vsrl.vi v10, v8, 24
; RV64-NEXT: vsrl.vi v12, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v14, v8, a3
; RV64-NEXT: vsrl.vx v16, v8, a5
; RV64-NEXT: vand.vx v16, v16, a0
@@ -895,7 +895,7 @@ define <vscale x 4 x i64> @vp_bswap_nxv4i64(<vscale x 4 x i64> %va, <vscale x 4
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vand.vx v12, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v12, v12, 24, v0.t
; RV64-NEXT: vand.vx v16, v8, a2, v0.t
; RV64-NEXT: vsll.vi v16, v16, 8, v0.t
@@ -978,7 +978,7 @@ define <vscale x 4 x i64> @vp_bswap_nxv4i64_unmasked(<vscale x 4 x i64> %va, i32
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vsrl.vi v12, v8, 24
; RV64-NEXT: vsrl.vi v16, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v20, v8, a3
; RV64-NEXT: vsrl.vx v24, v8, a5
; RV64-NEXT: vand.vx v24, v24, a0
@@ -1118,7 +1118,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -1239,7 +1239,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64_unmasked(<vscale x 7 x i64> %va, i32
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -1390,7 +1390,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -1511,7 +1511,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64_unmasked(<vscale x 8 x i64> %va, i32
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -1709,7 +1709,7 @@ define <vscale x 1 x i48> @vp_bswap_nxv1i48(<vscale x 1 x i48> %va, <vscale x 1
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vand.vx v9, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v9, v9, 24, v0.t
; RV64-NEXT: vand.vx v10, v8, a2, v0.t
; RV64-NEXT: vsll.vi v10, v10, 8, v0.t
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctlz-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/ctlz-sdnode.ll
index 97e1a7f41b92f..319d82f724ca7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctlz-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctlz-sdnode.ll
@@ -1189,10 +1189,10 @@ define <vscale x 1 x i64> @ctlz_nxv1i64(<vscale x 1 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1329,10 +1329,10 @@ define <vscale x 2 x i64> @ctlz_nxv2i64(<vscale x 2 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1469,10 +1469,10 @@ define <vscale x 4 x i64> @ctlz_nxv4i64(<vscale x 4 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1609,10 +1609,10 @@ define <vscale x 8 x i64> @ctlz_nxv8i64(<vscale x 8 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -2791,10 +2791,10 @@ define <vscale x 1 x i64> @ctlz_zero_undef_nxv1i64(<vscale x 1 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -2925,10 +2925,10 @@ define <vscale x 2 x i64> @ctlz_zero_undef_nxv2i64(<vscale x 2 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -3059,10 +3059,10 @@ define <vscale x 4 x i64> @ctlz_zero_undef_nxv4i64(<vscale x 4 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -3193,10 +3193,10 @@ define <vscale x 8 x i64> @ctlz_zero_undef_nxv8i64(<vscale x 8 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctpop-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/ctpop-sdnode.ll
index fa8e332e5076d..1018130b472d1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctpop-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctpop-sdnode.ll
@@ -717,10 +717,10 @@ define <vscale x 1 x i64> @ctpop_nxv1i64(<vscale x 1 x i64> %va) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -800,10 +800,10 @@ define <vscale x 2 x i64> @ctpop_nxv2i64(<vscale x 2 x i64> %va) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -883,10 +883,10 @@ define <vscale x 4 x i64> @ctpop_nxv4i64(<vscale x 4 x i64> %va) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -966,10 +966,10 @@ define <vscale x 8 x i64> @ctpop_nxv8i64(<vscale x 8 x i64> %va) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
index c70d5b9954f92..fba27e3d548cf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
@@ -1227,10 +1227,10 @@ define <vscale x 1 x i64> @vp_ctpop_nxv1i64(<vscale x 1 x i64> %va, <vscale x 1
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1309,10 +1309,10 @@ define <vscale x 1 x i64> @vp_ctpop_nxv1i64_unmasked(<vscale x 1 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1393,10 +1393,10 @@ define <vscale x 2 x i64> @vp_ctpop_nxv2i64(<vscale x 2 x i64> %va, <vscale x 2
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1475,10 +1475,10 @@ define <vscale x 2 x i64> @vp_ctpop_nxv2i64_unmasked(<vscale x 2 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1559,10 +1559,10 @@ define <vscale x 4 x i64> @vp_ctpop_nxv4i64(<vscale x 4 x i64> %va, <vscale x 4
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1641,10 +1641,10 @@ define <vscale x 4 x i64> @vp_ctpop_nxv4i64_unmasked(<vscale x 4 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1725,10 +1725,10 @@ define <vscale x 7 x i64> @vp_ctpop_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1807,10 +1807,10 @@ define <vscale x 7 x i64> @vp_ctpop_nxv7i64_unmasked(<vscale x 7 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1891,10 +1891,10 @@ define <vscale x 8 x i64> @vp_ctpop_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1973,10 +1973,10 @@ define <vscale x 8 x i64> @vp_ctpop_nxv8i64_unmasked(<vscale x 8 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2165,10 +2165,10 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a4, a4, -241
-; RV64-NEXT: addiw a5, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a4, a4, -241
+; RV64-NEXT: addi a5, a5, 257
; RV64-NEXT: slli a6, a2, 32
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
@@ -2356,10 +2356,10 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64_unmasked(<vscale x 16 x i64> %va,
; RV64-NEXT: lui a4, 209715
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 4112
-; RV64-NEXT: addiw a3, a3, 1365
-; RV64-NEXT: addiw a4, a4, 819
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 257
+; RV64-NEXT: addi a3, a3, 1365
+; RV64-NEXT: addi a4, a4, 819
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 257
; RV64-NEXT: slli a7, a3, 32
; RV64-NEXT: add a3, a3, a7
; RV64-NEXT: slli a7, a4, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/cttz-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/cttz-sdnode.ll
index 09aa487cb085f..faa3c48c49e50 100644
--- a/llvm/test/CodeGen/RISCV/rvv/cttz-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/cttz-sdnode.ll
@@ -1162,10 +1162,10 @@ define <vscale x 1 x i64> @cttz_nxv1i64(<vscale x 1 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1286,10 +1286,10 @@ define <vscale x 2 x i64> @cttz_nxv2i64(<vscale x 2 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1410,10 +1410,10 @@ define <vscale x 4 x i64> @cttz_nxv4i64(<vscale x 4 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -1534,10 +1534,10 @@ define <vscale x 8 x i64> @cttz_nxv8i64(<vscale x 8 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -2666,10 +2666,10 @@ define <vscale x 1 x i64> @cttz_zero_undef_nxv1i64(<vscale x 1 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -2782,10 +2782,10 @@ define <vscale x 2 x i64> @cttz_zero_undef_nxv2i64(<vscale x 2 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -2898,10 +2898,10 @@ define <vscale x 4 x i64> @cttz_zero_undef_nxv4i64(<vscale x 4 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
@@ -3014,10 +3014,10 @@ define <vscale x 8 x i64> @cttz_zero_undef_nxv8i64(<vscale x 8 x i64> %va) {
; RV64I-NEXT: lui a1, 209715
; RV64I-NEXT: lui a2, 61681
; RV64I-NEXT: lui a3, 4112
-; RV64I-NEXT: addiw a0, a0, 1365
-; RV64I-NEXT: addiw a1, a1, 819
-; RV64I-NEXT: addiw a2, a2, -241
-; RV64I-NEXT: addiw a3, a3, 257
+; RV64I-NEXT: addi a0, a0, 1365
+; RV64I-NEXT: addi a1, a1, 819
+; RV64I-NEXT: addi a2, a2, -241
+; RV64I-NEXT: addi a3, a3, 257
; RV64I-NEXT: slli a4, a0, 32
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: slli a4, a1, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
index 230a327548795..6bf882fe47fef 100644
--- a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
@@ -1338,10 +1338,10 @@ define <vscale x 1 x i64> @vp_cttz_nxv1i64(<vscale x 1 x i64> %va, <vscale x 1 x
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1426,10 +1426,10 @@ define <vscale x 1 x i64> @vp_cttz_nxv1i64_unmasked(<vscale x 1 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1516,10 +1516,10 @@ define <vscale x 2 x i64> @vp_cttz_nxv2i64(<vscale x 2 x i64> %va, <vscale x 2 x
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1604,10 +1604,10 @@ define <vscale x 2 x i64> @vp_cttz_nxv2i64_unmasked(<vscale x 2 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1694,10 +1694,10 @@ define <vscale x 4 x i64> @vp_cttz_nxv4i64(<vscale x 4 x i64> %va, <vscale x 4 x
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1782,10 +1782,10 @@ define <vscale x 4 x i64> @vp_cttz_nxv4i64_unmasked(<vscale x 4 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1872,10 +1872,10 @@ define <vscale x 7 x i64> @vp_cttz_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7 x
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1960,10 +1960,10 @@ define <vscale x 7 x i64> @vp_cttz_nxv7i64_unmasked(<vscale x 7 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2050,10 +2050,10 @@ define <vscale x 8 x i64> @vp_cttz_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8 x
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2138,10 +2138,10 @@ define <vscale x 8 x i64> @vp_cttz_nxv8i64_unmasked(<vscale x 8 x i64> %va, i32
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2334,10 +2334,10 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: lui a5, 4112
; RV64-NEXT: srli a6, a1, 3
; RV64-NEXT: sub a7, a0, a1
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a4, a4, -241
-; RV64-NEXT: addiw t0, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a4, a4, -241
+; RV64-NEXT: addi t0, a5, 257
; RV64-NEXT: vslidedown.vx v0, v0, a6
; RV64-NEXT: slli a6, a2, 32
; RV64-NEXT: add a6, a2, a6
@@ -2537,10 +2537,10 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64_unmasked(<vscale x 16 x i64> %va, i
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
; RV64-NEXT: sub a6, a0, a1
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a7, a4, -241
-; RV64-NEXT: addiw t0, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a7, a4, -241
+; RV64-NEXT: addi t0, a5, 257
; RV64-NEXT: slli a5, a2, 32
; RV64-NEXT: add a5, a2, a5
; RV64-NEXT: slli a4, a3, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
index a9e129ef11a2c..22bb7bf317241 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
@@ -797,7 +797,7 @@ define i32 @extractelt_sdiv_nxv4i32_splat(<vscale x 4 x i32> %x) {
; RV64M-NEXT: vsetivli zero, 1, e32, m1, ta, ma
; RV64M-NEXT: vmv.x.s a0, v8
; RV64M-NEXT: lui a1, 349525
-; RV64M-NEXT: addiw a1, a1, 1366
+; RV64M-NEXT: addi a1, a1, 1366
; RV64M-NEXT: mul a0, a0, a1
; RV64M-NEXT: srli a1, a0, 63
; RV64M-NEXT: srli a0, a0, 32
@@ -825,7 +825,7 @@ define i32 @extractelt_udiv_nxv4i32_splat(<vscale x 4 x i32> %x) {
; RV64M-NEXT: vsetivli zero, 1, e32, m1, ta, ma
; RV64M-NEXT: vmv.x.s a0, v8
; RV64M-NEXT: lui a1, 349525
-; RV64M-NEXT: addiw a1, a1, 1366
+; RV64M-NEXT: addi a1, a1, 1366
; RV64M-NEXT: mul a0, a0, a1
; RV64M-NEXT: srli a1, a0, 63
; RV64M-NEXT: srli a0, a0, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
index 6ab688ebbf663..3d83065009f28 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
@@ -911,9 +911,9 @@ define <2 x i64> @vp_bitreverse_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %e
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -924,7 +924,7 @@ define <2 x i64> @vp_bitreverse_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %e
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vand.vx v9, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v9, v9, 24, v0.t
; RV64-NEXT: vand.vx v10, v8, a3, v0.t
; RV64-NEXT: vsll.vi v10, v10, 8, v0.t
@@ -1048,7 +1048,7 @@ define <2 x i64> @vp_bitreverse_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vsrl.vi v9, v8, 24
; RV64-NEXT: vsrl.vi v10, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v11, v8, a3
; RV64-NEXT: vsrl.vx v12, v8, a5
; RV64-NEXT: vand.vx v12, v12, a0
@@ -1069,9 +1069,9 @@ define <2 x i64> @vp_bitreverse_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -1184,9 +1184,9 @@ define <4 x i64> @vp_bitreverse_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %e
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -1197,7 +1197,7 @@ define <4 x i64> @vp_bitreverse_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %e
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vand.vx v10, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v10, v10, 24, v0.t
; RV64-NEXT: vand.vx v12, v8, a3, v0.t
; RV64-NEXT: vsll.vi v12, v12, 8, v0.t
@@ -1321,7 +1321,7 @@ define <4 x i64> @vp_bitreverse_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vsrl.vi v12, v8, 24
; RV64-NEXT: vsrl.vi v14, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v10, v8, a3
; RV64-NEXT: vsrl.vx v16, v8, a5
; RV64-NEXT: vand.vx v16, v16, a0
@@ -1342,9 +1342,9 @@ define <4 x i64> @vp_bitreverse_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -1457,9 +1457,9 @@ define <8 x i64> @vp_bitreverse_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %e
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 209715
; RV64-NEXT: lui a7, 349525
-; RV64-NEXT: addiw a5, a5, -241
-; RV64-NEXT: addiw a6, a6, 819
-; RV64-NEXT: addiw a7, a7, 1365
+; RV64-NEXT: addi a5, a5, -241
+; RV64-NEXT: addi a6, a6, 819
+; RV64-NEXT: addi a7, a7, 1365
; RV64-NEXT: slli t0, a5, 32
; RV64-NEXT: add t0, a5, t0
; RV64-NEXT: slli a5, a6, 32
@@ -1470,7 +1470,7 @@ define <8 x i64> @vp_bitreverse_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %e
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vand.vx v12, v8, a1, v0.t
; RV64-NEXT: slli a3, a3, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v12, v12, 24, v0.t
; RV64-NEXT: vand.vx v16, v8, a3, v0.t
; RV64-NEXT: vsll.vi v16, v16, 8, v0.t
@@ -1594,7 +1594,7 @@ define <8 x i64> @vp_bitreverse_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vsrl.vi v16, v8, 24
; RV64-NEXT: vsrl.vi v20, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v12, v8, a3
; RV64-NEXT: vsrl.vx v24, v8, a5
; RV64-NEXT: vand.vx v24, v24, a0
@@ -1615,9 +1615,9 @@ define <8 x i64> @vp_bitreverse_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl)
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -1787,7 +1787,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -1814,9 +1814,9 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -1962,7 +1962,7 @@ define <15 x i64> @vp_bitreverse_v15i64_unmasked(<15 x i64> %va, i32 zeroext %ev
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -1986,9 +1986,9 @@ define <15 x i64> @vp_bitreverse_v15i64_unmasked(<15 x i64> %va, i32 zeroext %ev
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2166,7 +2166,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -2193,9 +2193,9 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
@@ -2341,7 +2341,7 @@ define <16 x i64> @vp_bitreverse_v16i64_unmasked(<16 x i64> %va, i32 zeroext %ev
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -2365,9 +2365,9 @@ define <16 x i64> @vp_bitreverse_v16i64_unmasked(<16 x i64> %va, i32 zeroext %ev
; RV64-NEXT: lui a0, 61681
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 349525
-; RV64-NEXT: addiw a0, a0, -241
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, 1365
+; RV64-NEXT: addi a0, a0, -241
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, 1365
; RV64-NEXT: slli a3, a0, 32
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: add a0, a0, a3
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse.ll
index 946ca4d1ab904..6d9793c12153e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse.ll
@@ -188,7 +188,7 @@ define void @bitreverse_v2i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a3, 16
; RV64-NEXT: lui a4, 4080
; RV64-NEXT: li a5, 255
-; RV64-NEXT: addiw a3, a3, -256
+; RV64-NEXT: addi a3, a3, -256
; RV64-NEXT: slli a5, a5, 24
; RV64-NEXT: vsrl.vx v9, v8, a1
; RV64-NEXT: vsrl.vx v10, v8, a2
@@ -211,9 +211,9 @@ define void @bitreverse_v2i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a1, 61681
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 349525
-; RV64-NEXT: addiw a1, a1, -241
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, 1365
+; RV64-NEXT: addi a1, a1, -241
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, 1365
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: slli a5, a2, 32
; RV64-NEXT: add a1, a1, a4
@@ -440,7 +440,7 @@ define void @bitreverse_v4i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a3, 16
; RV64-NEXT: lui a4, 4080
; RV64-NEXT: li a5, 255
-; RV64-NEXT: addiw a3, a3, -256
+; RV64-NEXT: addi a3, a3, -256
; RV64-NEXT: slli a5, a5, 24
; RV64-NEXT: vsrl.vx v8, v14, a1
; RV64-NEXT: vsrl.vx v10, v14, a2
@@ -463,9 +463,9 @@ define void @bitreverse_v4i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a1, 61681
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 349525
-; RV64-NEXT: addiw a1, a1, -241
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, 1365
+; RV64-NEXT: addi a1, a1, -241
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, 1365
; RV64-NEXT: slli a4, a1, 32
; RV64-NEXT: slli a5, a2, 32
; RV64-NEXT: add a1, a1, a4
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
index 4eb189d132787..b7ca932bb1c45 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
@@ -331,7 +331,7 @@ define <2 x i64> @vp_bswap_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vand.vx v9, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v9, v9, 24, v0.t
; RV64-NEXT: vand.vx v10, v8, a2, v0.t
; RV64-NEXT: vsll.vi v10, v10, 8, v0.t
@@ -410,7 +410,7 @@ define <2 x i64> @vp_bswap_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64-NEXT: vsrl.vi v9, v8, 24
; RV64-NEXT: vsrl.vi v10, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v11, v8, a3
; RV64-NEXT: vsrl.vx v12, v8, a5
; RV64-NEXT: vand.vx v12, v12, a0
@@ -491,7 +491,7 @@ define <4 x i64> @vp_bswap_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vand.vx v10, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v10, v10, 24, v0.t
; RV64-NEXT: vand.vx v12, v8, a2, v0.t
; RV64-NEXT: vsll.vi v12, v12, 8, v0.t
@@ -570,7 +570,7 @@ define <4 x i64> @vp_bswap_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m2, ta, ma
; RV64-NEXT: vsrl.vi v10, v8, 24
; RV64-NEXT: vsrl.vi v12, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v14, v8, a3
; RV64-NEXT: vsrl.vx v16, v8, a5
; RV64-NEXT: vand.vx v16, v16, a0
@@ -651,7 +651,7 @@ define <8 x i64> @vp_bswap_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vand.vx v12, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v12, v12, 24, v0.t
; RV64-NEXT: vand.vx v16, v8, a2, v0.t
; RV64-NEXT: vsll.vi v16, v16, 8, v0.t
@@ -730,7 +730,7 @@ define <8 x i64> @vp_bswap_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: vsetvli zero, a0, e64, m4, ta, ma
; RV64-NEXT: vsrl.vi v12, v8, 24
; RV64-NEXT: vsrl.vi v16, v8, 8
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v20, v8, a3
; RV64-NEXT: vsrl.vx v24, v8, a5
; RV64-NEXT: vand.vx v24, v24, a0
@@ -866,7 +866,7 @@ define <15 x i64> @vp_bswap_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -984,7 +984,7 @@ define <15 x i64> @vp_bswap_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
@@ -1131,7 +1131,7 @@ define <16 x i64> @vp_bswap_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vand.vx v16, v8, a1, v0.t
; RV64-NEXT: slli a2, a2, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsll.vi v16, v16, 24, v0.t
; RV64-NEXT: vand.vx v24, v8, a2, v0.t
; RV64-NEXT: vsll.vi v24, v24, 8, v0.t
@@ -1249,7 +1249,7 @@ define <16 x i64> @vp_bswap_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: li a5, 40
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsrl.vi v24, v8, 24
-; RV64-NEXT: addiw a0, a4, -256
+; RV64-NEXT: addi a0, a4, -256
; RV64-NEXT: vsrl.vx v16, v8, a3
; RV64-NEXT: vsrl.vx v0, v8, a5
; RV64-NEXT: vand.vx v0, v0, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap.ll
index 5e491f21e6213..5b823442c8b04 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap.ll
@@ -116,7 +116,7 @@ define void @bswap_v2i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a3, 16
; RV64-NEXT: lui a4, 4080
; RV64-NEXT: li a5, 255
-; RV64-NEXT: addiw a3, a3, -256
+; RV64-NEXT: addi a3, a3, -256
; RV64-NEXT: slli a5, a5, 24
; RV64-NEXT: vsrl.vx v9, v8, a1
; RV64-NEXT: vsrl.vx v10, v8, a2
@@ -269,7 +269,7 @@ define void @bswap_v4i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a3, 16
; RV64-NEXT: lui a4, 4080
; RV64-NEXT: li a5, 255
-; RV64-NEXT: addiw a3, a3, -256
+; RV64-NEXT: addi a3, a3, -256
; RV64-NEXT: slli a5, a5, 24
; RV64-NEXT: vsrl.vx v10, v8, a1
; RV64-NEXT: vsrl.vx v12, v8, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
index 8adf87bb27d02..2250ab5bd0bbe 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
@@ -931,10 +931,10 @@ define <2 x i64> @vp_ctlz_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1035,10 +1035,10 @@ define <2 x i64> @vp_ctlz_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1141,10 +1141,10 @@ define <4 x i64> @vp_ctlz_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1245,10 +1245,10 @@ define <4 x i64> @vp_ctlz_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1351,10 +1351,10 @@ define <8 x i64> @vp_ctlz_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1455,10 +1455,10 @@ define <8 x i64> @vp_ctlz_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1561,10 +1561,10 @@ define <15 x i64> @vp_ctlz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1665,10 +1665,10 @@ define <15 x i64> @vp_ctlz_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1771,10 +1771,10 @@ define <16 x i64> @vp_ctlz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -1875,10 +1875,10 @@ define <16 x i64> @vp_ctlz_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -2101,10 +2101,10 @@ define <32 x i64> @vp_ctlz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a6, a4, -241
-; RV64-NEXT: addiw a7, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a6, a4, -241
+; RV64-NEXT: addi a7, a5, 257
; RV64-NEXT: slli a5, a2, 32
; RV64-NEXT: add a5, a2, a5
; RV64-NEXT: slli a4, a3, 32
@@ -2321,10 +2321,10 @@ define <32 x i64> @vp_ctlz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a4, 209715
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 4112
-; RV64-NEXT: addiw a7, a3, 1365
-; RV64-NEXT: addiw a3, a4, 819
-; RV64-NEXT: addiw a4, a5, -241
-; RV64-NEXT: addiw a6, a6, 257
+; RV64-NEXT: addi a7, a3, 1365
+; RV64-NEXT: addi a3, a4, 819
+; RV64-NEXT: addi a4, a5, -241
+; RV64-NEXT: addi a6, a6, 257
; RV64-NEXT: slli a5, a7, 32
; RV64-NEXT: add a7, a7, a5
; RV64-NEXT: slli a5, a3, 32
@@ -3316,10 +3316,10 @@ define <2 x i64> @vp_ctlz_zero_undef_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroe
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3420,10 +3420,10 @@ define <2 x i64> @vp_ctlz_zero_undef_v2i64_unmasked(<2 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3524,10 +3524,10 @@ define <4 x i64> @vp_ctlz_zero_undef_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroe
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3628,10 +3628,10 @@ define <4 x i64> @vp_ctlz_zero_undef_v4i64_unmasked(<4 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3732,10 +3732,10 @@ define <8 x i64> @vp_ctlz_zero_undef_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroe
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3836,10 +3836,10 @@ define <8 x i64> @vp_ctlz_zero_undef_v8i64_unmasked(<8 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -3940,10 +3940,10 @@ define <15 x i64> @vp_ctlz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -4044,10 +4044,10 @@ define <15 x i64> @vp_ctlz_zero_undef_v15i64_unmasked(<15 x i64> %va, i32 zeroex
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -4148,10 +4148,10 @@ define <16 x i64> @vp_ctlz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -4252,10 +4252,10 @@ define <16 x i64> @vp_ctlz_zero_undef_v16i64_unmasked(<16 x i64> %va, i32 zeroex
; RV64-NEXT: lui a1, 209715
; RV64-NEXT: lui a2, 61681
; RV64-NEXT: lui a3, 4112
-; RV64-NEXT: addiw a0, a0, 1365
-; RV64-NEXT: addiw a1, a1, 819
-; RV64-NEXT: addiw a2, a2, -241
-; RV64-NEXT: addiw a3, a3, 257
+; RV64-NEXT: addi a0, a0, 1365
+; RV64-NEXT: addi a1, a1, 819
+; RV64-NEXT: addi a2, a2, -241
+; RV64-NEXT: addi a3, a3, 257
; RV64-NEXT: slli a4, a0, 32
; RV64-NEXT: add a0, a0, a4
; RV64-NEXT: slli a4, a1, 32
@@ -4476,10 +4476,10 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a6, a4, -241
-; RV64-NEXT: addiw a7, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a6, a4, -241
+; RV64-NEXT: addi a7, a5, 257
; RV64-NEXT: slli a5, a2, 32
; RV64-NEXT: add a5, a2, a5
; RV64-NEXT: slli a4, a3, 32
@@ -4696,10 +4696,10 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV64-NEXT: lui a4, 209715
; RV64-NEXT: lui a5, 61681
; RV64-NEXT: lui a6, 4112
-; RV64-NEXT: addiw a7, a3, 1365
-; RV64-NEXT: addiw a3, a4, 819
-; RV64-NEXT: addiw a4, a5, -241
-; RV64-NEXT: addiw a6, a6, 257
+; RV64-NEXT: addi a7, a3, 1365
+; RV64-NEXT: addi a3, a4, 819
+; RV64-NEXT: addi a4, a5, -241
+; RV64-NEXT: addi a6, a6, 257
; RV64-NEXT: slli a5, a7, 32
; RV64-NEXT: add a7, a7, a5
; RV64-NEXT: slli a5, a3, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz.ll
index 829cc9bac7ed3..61730b87c5517 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz.ll
@@ -305,10 +305,10 @@ define void @ctlz_v2i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -696,10 +696,10 @@ define void @ctlz_v4i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -1068,10 +1068,10 @@ define void @ctlz_zero_undef_v2i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -1438,10 +1438,10 @@ define void @ctlz_zero_undef_v4i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
index 7a0897c9da416..94fecbdfde18e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
@@ -699,10 +699,10 @@ define <2 x i64> @vp_ctpop_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -775,10 +775,10 @@ define <2 x i64> @vp_ctpop_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -853,10 +853,10 @@ define <4 x i64> @vp_ctpop_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -929,10 +929,10 @@ define <4 x i64> @vp_ctpop_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1007,10 +1007,10 @@ define <8 x i64> @vp_ctpop_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1083,10 +1083,10 @@ define <8 x i64> @vp_ctpop_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1161,10 +1161,10 @@ define <15 x i64> @vp_ctpop_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1237,10 +1237,10 @@ define <15 x i64> @vp_ctpop_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1315,10 +1315,10 @@ define <16 x i64> @vp_ctpop_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1391,10 +1391,10 @@ define <16 x i64> @vp_ctpop_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1577,10 +1577,10 @@ define <32 x i64> @vp_ctpop_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %ev
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1708,10 +1708,10 @@ define <32 x i64> @vp_ctpop_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a4, a4, -241
-; RV64-NEXT: addiw a5, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a4, a4, -241
+; RV64-NEXT: addi a5, a5, 257
; RV64-NEXT: slli a6, a2, 32
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop.ll
index 4fbe67cfcd642..44b9331fd2caf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop.ll
@@ -173,10 +173,10 @@ define void @ctpop_v2i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -479,10 +479,10 @@ define void @ctpop_v4i64(ptr %x, ptr %y) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
index 6cee5a6d24ad1..bdce00b10e5a7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
@@ -774,10 +774,10 @@ define <2 x i64> @vp_cttz_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -856,10 +856,10 @@ define <2 x i64> @vp_cttz_v2i64_unmasked(<2 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -940,10 +940,10 @@ define <4 x i64> @vp_cttz_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1022,10 +1022,10 @@ define <4 x i64> @vp_cttz_v4i64_unmasked(<4 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1106,10 +1106,10 @@ define <8 x i64> @vp_cttz_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1188,10 +1188,10 @@ define <8 x i64> @vp_cttz_v8i64_unmasked(<8 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1272,10 +1272,10 @@ define <15 x i64> @vp_cttz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1354,10 +1354,10 @@ define <15 x i64> @vp_cttz_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1438,10 +1438,10 @@ define <16 x i64> @vp_cttz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1520,10 +1520,10 @@ define <16 x i64> @vp_cttz_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -1715,10 +1715,10 @@ define <32 x i64> @vp_cttz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a5, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a5, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a3, a1, 32
; RV64-NEXT: add a6, a1, a3
; RV64-NEXT: slli a3, a2, 32
@@ -1884,10 +1884,10 @@ define <32 x i64> @vp_cttz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a4, a4, -241
-; RV64-NEXT: addiw a5, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a4, a4, -241
+; RV64-NEXT: addi a5, a5, 257
; RV64-NEXT: slli a6, a2, 32
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
@@ -2689,10 +2689,10 @@ define <2 x i64> @vp_cttz_zero_undef_v2i64(<2 x i64> %va, <2 x i1> %m, i32 zeroe
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2771,10 +2771,10 @@ define <2 x i64> @vp_cttz_zero_undef_v2i64_unmasked(<2 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2853,10 +2853,10 @@ define <4 x i64> @vp_cttz_zero_undef_v4i64(<4 x i64> %va, <4 x i1> %m, i32 zeroe
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -2935,10 +2935,10 @@ define <4 x i64> @vp_cttz_zero_undef_v4i64_unmasked(<4 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3017,10 +3017,10 @@ define <8 x i64> @vp_cttz_zero_undef_v8i64(<8 x i64> %va, <8 x i1> %m, i32 zeroe
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3099,10 +3099,10 @@ define <8 x i64> @vp_cttz_zero_undef_v8i64_unmasked(<8 x i64> %va, i32 zeroext %
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3181,10 +3181,10 @@ define <15 x i64> @vp_cttz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3263,10 +3263,10 @@ define <15 x i64> @vp_cttz_zero_undef_v15i64_unmasked(<15 x i64> %va, i32 zeroex
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3345,10 +3345,10 @@ define <16 x i64> @vp_cttz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3427,10 +3427,10 @@ define <16 x i64> @vp_cttz_zero_undef_v16i64_unmasked(<16 x i64> %va, i32 zeroex
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a3, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a3, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a5, a1, 32
; RV64-NEXT: add a1, a1, a5
; RV64-NEXT: slli a5, a2, 32
@@ -3620,10 +3620,10 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64-NEXT: lui a2, 209715
; RV64-NEXT: lui a3, 61681
; RV64-NEXT: lui a4, 4112
-; RV64-NEXT: addiw a1, a1, 1365
-; RV64-NEXT: addiw a2, a2, 819
-; RV64-NEXT: addiw a5, a3, -241
-; RV64-NEXT: addiw a4, a4, 257
+; RV64-NEXT: addi a1, a1, 1365
+; RV64-NEXT: addi a2, a2, 819
+; RV64-NEXT: addi a5, a3, -241
+; RV64-NEXT: addi a4, a4, 257
; RV64-NEXT: slli a3, a1, 32
; RV64-NEXT: add a6, a1, a3
; RV64-NEXT: slli a3, a2, 32
@@ -3789,10 +3789,10 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV64-NEXT: lui a3, 209715
; RV64-NEXT: lui a4, 61681
; RV64-NEXT: lui a5, 4112
-; RV64-NEXT: addiw a2, a2, 1365
-; RV64-NEXT: addiw a3, a3, 819
-; RV64-NEXT: addiw a4, a4, -241
-; RV64-NEXT: addiw a5, a5, 257
+; RV64-NEXT: addi a2, a2, 1365
+; RV64-NEXT: addi a3, a3, 819
+; RV64-NEXT: addi a4, a4, -241
+; RV64-NEXT: addi a5, a5, 257
; RV64-NEXT: slli a6, a2, 32
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz.ll
index d884cece89507..307b143f4449f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz.ll
@@ -294,10 +294,10 @@ define void @cttz_v2i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -671,10 +671,10 @@ define void @cttz_v4i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -1025,10 +1025,10 @@ define void @cttz_zero_undef_v2i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
@@ -1373,10 +1373,10 @@ define void @cttz_zero_undef_v4i64(ptr %x, ptr %y) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw a1, a1, 1365
-; RV64I-NEXT: addiw a2, a2, 819
-; RV64I-NEXT: addiw a3, a3, -241
-; RV64I-NEXT: addiw a4, a4, 257
+; RV64I-NEXT: addi a1, a1, 1365
+; RV64I-NEXT: addi a2, a2, 819
+; RV64I-NEXT: addi a3, a3, -241
+; RV64I-NEXT: addi a4, a4, 257
; RV64I-NEXT: slli a5, a1, 32
; RV64I-NEXT: add a1, a1, a5
; RV64I-NEXT: slli a5, a2, 32
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract.ll
index 75732fe2f7e65..dba5d26c216fa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-extract.ll
@@ -1368,7 +1368,7 @@ define i32 @extractelt_sdiv_v4i32(<4 x i32> %x) {
; RV64M-NEXT: vslidedown.vi v8, v8, 2
; RV64M-NEXT: lui a0, 322639
; RV64M-NEXT: vmv.x.s a1, v8
-; RV64M-NEXT: addiw a0, a0, -945
+; RV64M-NEXT: addi a0, a0, -945
; RV64M-NEXT: mul a0, a1, a0
; RV64M-NEXT: srli a1, a0, 63
; RV64M-NEXT: srai a0, a0, 34
@@ -1381,7 +1381,7 @@ define i32 @extractelt_sdiv_v4i32(<4 x i32> %x) {
; VISNI-NEXT: ri.vextract.x.v a0, v8, 2
; VISNI-NEXT: lui a1, 322639
; VISNI-NEXT: sext.w a0, a0
-; VISNI-NEXT: addiw a1, a1, -945
+; VISNI-NEXT: addi a1, a1, -945
; VISNI-NEXT: mul a0, a0, a1
; VISNI-NEXT: srli a1, a0, 63
; VISNI-NEXT: srai a0, a0, 34
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
index aa55bd7af59c5..94bf16e6fe046 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
@@ -668,7 +668,7 @@ define void @buildvec_seq2_v16i8_v2i64(ptr %x) {
; RV64V-LABEL: buildvec_seq2_v16i8_v2i64:
; RV64V: # %bb.0:
; RV64V-NEXT: lui a1, 528432
-; RV64V-NEXT: addiw a1, a1, 513
+; RV64V-NEXT: addi a1, a1, 513
; RV64V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV64V-NEXT: vmv.v.x v8, a1
; RV64V-NEXT: vsetivli zero, 16, e8, m1, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int.ll
index a9adc87d29c8b..0c30cbe4a42ef 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int.ll
@@ -1207,8 +1207,8 @@ define void @mulhu_v2i64(ptr %x) {
; RV64-NEXT: vle64.v v8, (a0)
; RV64-NEXT: lui a1, 838861
; RV64-NEXT: lui a2, 699051
-; RV64-NEXT: addiw a1, a1, -819
-; RV64-NEXT: addiw a2, a2, -1365
+; RV64-NEXT: addi a1, a1, -819
+; RV64-NEXT: addi a2, a2, -1365
; RV64-NEXT: slli a3, a1, 32
; RV64-NEXT: add a1, a1, a3
; RV64-NEXT: slli a3, a2, 32
@@ -1368,7 +1368,7 @@ define void @mulhs_v2i64(ptr %x) {
; RV64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV64-NEXT: vle64.v v8, (a0)
; RV64-NEXT: lui a1, 349525
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: slli a2, a1, 32
; RV64-NEXT: add a1, a1, a2
; RV64-NEXT: lui a2, %hi(.LCPI74_0)
@@ -3501,7 +3501,7 @@ define void @mulhs_v4i64(ptr %x) {
; RV64-NEXT: vle64.v v8, (a0)
; RV64-NEXT: lui a1, 349525
; RV64-NEXT: lui a2, 1044496
-; RV64-NEXT: addiw a1, a1, 1365
+; RV64-NEXT: addi a1, a1, 1365
; RV64-NEXT: addi a2, a2, -256
; RV64-NEXT: vmv.s.x v14, a2
; RV64-NEXT: slli a2, a1, 32
@@ -5527,7 +5527,7 @@ define void @mulhu_vx_v2i64(ptr %x) {
; RV64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
; RV64-NEXT: vle64.v v8, (a0)
; RV64-NEXT: lui a1, 699051
-; RV64-NEXT: addiw a1, a1, -1365
+; RV64-NEXT: addi a1, a1, -1365
; RV64-NEXT: slli a2, a1, 32
; RV64-NEXT: add a1, a1, a2
; RV64-NEXT: vmulhu.vx v8, v8, a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
index 76eca8e034303..c5986e9d75ab4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
@@ -1128,7 +1128,7 @@ define <2 x i64> @mgather_v2i16_zextload_v2i64(<2 x ptr> %ptrs, <2 x i1> %m, <2
; RV64ZVE32F-NEXT: vmv.x.s a0, v8
; RV64ZVE32F-NEXT: lui a1, 16
; RV64ZVE32F-NEXT: vslidedown.vi v8, v8, 1
-; RV64ZVE32F-NEXT: addiw a1, a1, -1
+; RV64ZVE32F-NEXT: addi a1, a1, -1
; RV64ZVE32F-NEXT: vmv.x.s a2, v8
; RV64ZVE32F-NEXT: and a0, a0, a1
; RV64ZVE32F-NEXT: and a1, a2, a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-sat-clip.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-sat-clip.ll
index 4e367bb0d70cd..3b1dc298c12ce 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-sat-clip.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-sat-clip.ll
@@ -268,7 +268,7 @@ define void @trunc_sat_i32i64_notopt(ptr %x, ptr %y) {
; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vle64.v v8, (a0)
; CHECK-NEXT: lui a0, 524288
-; CHECK-NEXT: addiw a0, a0, 1
+; CHECK-NEXT: addi a0, a0, 1
; CHECK-NEXT: vmax.vx v8, v8, a0
; CHECK-NEXT: li a0, 1
; CHECK-NEXT: slli a0, a0, 31
@@ -627,7 +627,7 @@ define void @trunc_sat_u16u64_notopt(ptr %x, ptr %y) {
; CHECK-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-NEXT: vle64.v v8, (a0)
; CHECK-NEXT: lui a0, 8
-; CHECK-NEXT: addiw a0, a0, -1
+; CHECK-NEXT: addi a0, a0, -1
; CHECK-NEXT: vminu.vx v8, v8, a0
; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-NEXT: vnsrl.wi v10, v8, 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-zvqdotq.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-zvqdotq.ll
index c657c0337206a..4d994343cdc75 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-zvqdotq.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-zvqdotq.ll
@@ -230,29 +230,17 @@ define i32 @reduce_of_sext(<16 x i8> %a) {
; NODOT-NEXT: vmv.x.s a0, v8
; NODOT-NEXT: ret
;
-; DOT32-LABEL: reduce_of_sext:
-; DOT32: # %bb.0: # %entry
-; DOT32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; DOT32-NEXT: vmv.v.i v9, 0
-; DOT32-NEXT: lui a0, 4112
-; DOT32-NEXT: addi a0, a0, 257
-; DOT32-NEXT: vqdot.vx v9, v8, a0
-; DOT32-NEXT: vmv.s.x v8, zero
-; DOT32-NEXT: vredsum.vs v8, v9, v8
-; DOT32-NEXT: vmv.x.s a0, v8
-; DOT32-NEXT: ret
-;
-; DOT64-LABEL: reduce_of_sext:
-; DOT64: # %bb.0: # %entry
-; DOT64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; DOT64-NEXT: vmv.v.i v9, 0
-; DOT64-NEXT: lui a0, 4112
-; DOT64-NEXT: addiw a0, a0, 257
-; DOT64-NEXT: vqdot.vx v9, v8, a0
-; DOT64-NEXT: vmv.s.x v8, zero
-; DOT64-NEXT: vredsum.vs v8, v9, v8
-; DOT64-NEXT: vmv.x.s a0, v8
-; DOT64-NEXT: ret
+; DOT-LABEL: reduce_of_sext:
+; DOT: # %bb.0: # %entry
+; DOT-NEXT: vsetivli zero, 4, e32, m1, ta, ma
+; DOT-NEXT: vmv.v.i v9, 0
+; DOT-NEXT: lui a0, 4112
+; DOT-NEXT: addi a0, a0, 257
+; DOT-NEXT: vqdot.vx v9, v8, a0
+; DOT-NEXT: vmv.s.x v8, zero
+; DOT-NEXT: vredsum.vs v8, v9, v8
+; DOT-NEXT: vmv.x.s a0, v8
+; DOT-NEXT: ret
entry:
%a.ext = sext <16 x i8> %a to <16 x i32>
%res = tail call i32 @llvm.vector.reduce.add.v16i32(<16 x i32> %a.ext)
@@ -269,29 +257,17 @@ define i32 @reduce_of_zext(<16 x i8> %a) {
; NODOT-NEXT: vmv.x.s a0, v8
; NODOT-NEXT: ret
;
-; DOT32-LABEL: reduce_of_zext:
-; DOT32: # %bb.0: # %entry
-; DOT32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; DOT32-NEXT: vmv.v.i v9, 0
-; DOT32-NEXT: lui a0, 4112
-; DOT32-NEXT: addi a0, a0, 257
-; DOT32-NEXT: vqdotu.vx v9, v8, a0
-; DOT32-NEXT: vmv.s.x v8, zero
-; DOT32-NEXT: vredsum.vs v8, v9, v8
-; DOT32-NEXT: vmv.x.s a0, v8
-; DOT32-NEXT: ret
-;
-; DOT64-LABEL: reduce_of_zext:
-; DOT64: # %bb.0: # %entry
-; DOT64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; DOT64-NEXT: vmv.v.i v9, 0
-; DOT64-NEXT: lui a0, 4112
-; DOT64-NEXT: addiw a0, a0, 257
-; DOT64-NEXT: vqdotu.vx v9, v8, a0
-; DOT64-NEXT: vmv.s.x v8, zero
-; DOT64-NEXT: vredsum.vs v8, v9, v8
-; DOT64-NEXT: vmv.x.s a0, v8
-; DOT64-NEXT: ret
+; DOT-LABEL: reduce_of_zext:
+; DOT: # %bb.0: # %entry
+; DOT-NEXT: vsetivli zero, 4, e32, m1, ta, ma
+; DOT-NEXT: vmv.v.i v9, 0
+; DOT-NEXT: lui a0, 4112
+; DOT-NEXT: addi a0, a0, 257
+; DOT-NEXT: vqdotu.vx v9, v8, a0
+; DOT-NEXT: vmv.s.x v8, zero
+; DOT-NEXT: vredsum.vs v8, v9, v8
+; DOT-NEXT: vmv.x.s a0, v8
+; DOT-NEXT: ret
entry:
%a.ext = zext <16 x i8> %a to <16 x i32>
%res = tail call i32 @llvm.vector.reduce.add.v16i32(<16 x i32> %a.ext)
@@ -965,3 +941,6 @@ entry:
%res = call <16 x i32> @llvm.experimental.vector.partial.reduce.add.nvx8i32.nvx16i32.nvx16i32(<16 x i32> %mul, <16 x i32> zeroinitializer)
ret <16 x i32> %res
}
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; DOT32: {{.*}}
+; DOT64: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll b/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
index 0640a6f3af257..f9ac53b76ebaf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
@@ -897,7 +897,7 @@ define <2 x i16> @stest_f64i16(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.d a1, fa1, rtz
; CHECK-NOV-NEXT: lui a2, 8
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.w.d a0, fa0, rtz
; CHECK-NOV-NEXT: bge a1, a2, .LBB9_5
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -945,7 +945,7 @@ define <2 x i16> @utest_f64i16(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.wu.d a0, fa0, rtz
; CHECK-NOV-NEXT: lui a2, 16
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.wu.d a1, fa1, rtz
; CHECK-NOV-NEXT: bgeu a0, a2, .LBB10_3
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -979,7 +979,7 @@ define <2 x i16> @ustest_f64i16(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.d a1, fa1, rtz
; CHECK-NOV-NEXT: lui a2, 16
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.w.d a0, fa0, rtz
; CHECK-NOV-NEXT: blt a1, a2, .LBB11_2
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -1020,7 +1020,7 @@ define <4 x i16> @stest_f32i16(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.s a1, fa3, rtz
; CHECK-NOV-NEXT: lui a5, 8
-; CHECK-NOV-NEXT: addiw a5, a5, -1
+; CHECK-NOV-NEXT: addi a5, a5, -1
; CHECK-NOV-NEXT: fcvt.w.s a2, fa2, rtz
; CHECK-NOV-NEXT: bge a1, a5, .LBB12_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -1096,7 +1096,7 @@ define <4 x i16> @utest_f32i16(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.wu.s a1, fa0, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: fcvt.wu.s a2, fa1, rtz
; CHECK-NOV-NEXT: bgeu a1, a3, .LBB13_6
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -1148,7 +1148,7 @@ define <4 x i16> @ustest_f32i16(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.s a1, fa3, rtz
; CHECK-NOV-NEXT: lui a4, 16
-; CHECK-NOV-NEXT: addiw a4, a4, -1
+; CHECK-NOV-NEXT: addi a4, a4, -1
; CHECK-NOV-NEXT: fcvt.w.s a2, fa2, rtz
; CHECK-NOV-NEXT: bge a1, a4, .LBB14_6
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -1283,7 +1283,7 @@ define <8 x i16> @stest_f16i16(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.l.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a7, 8
-; CHECK-NOV-NEXT: addiw a7, a7, -1
+; CHECK-NOV-NEXT: addi a7, a7, -1
; CHECK-NOV-NEXT: bge a0, a7, .LBB15_18
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.l.s a1, fs5, rtz
@@ -1667,7 +1667,7 @@ define <8 x i16> @utesth_f16i16(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.lu.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: bgeu a0, a3, .LBB16_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.lu.s a1, fs5, rtz
@@ -2007,7 +2007,7 @@ define <8 x i16> @ustest_f16i16(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.l.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a4, 16
-; CHECK-NOV-NEXT: addiw a4, a4, -1
+; CHECK-NOV-NEXT: addi a4, a4, -1
; CHECK-NOV-NEXT: bge a0, a4, .LBB17_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.l.s a1, fs5, rtz
@@ -4449,7 +4449,7 @@ define <2 x i16> @stest_f64i16_mm(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.d a1, fa1, rtz
; CHECK-NOV-NEXT: lui a2, 8
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.w.d a0, fa0, rtz
; CHECK-NOV-NEXT: bge a1, a2, .LBB36_5
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4495,7 +4495,7 @@ define <2 x i16> @utest_f64i16_mm(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.wu.d a0, fa0, rtz
; CHECK-NOV-NEXT: lui a2, 16
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.wu.d a1, fa1, rtz
; CHECK-NOV-NEXT: bgeu a0, a2, .LBB37_3
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4528,7 +4528,7 @@ define <2 x i16> @ustest_f64i16_mm(<2 x double> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.d a1, fa1, rtz
; CHECK-NOV-NEXT: lui a2, 16
-; CHECK-NOV-NEXT: addiw a2, a2, -1
+; CHECK-NOV-NEXT: addi a2, a2, -1
; CHECK-NOV-NEXT: fcvt.w.d a0, fa0, rtz
; CHECK-NOV-NEXT: blt a1, a2, .LBB38_2
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4567,7 +4567,7 @@ define <4 x i16> @stest_f32i16_mm(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.s a1, fa3, rtz
; CHECK-NOV-NEXT: lui a5, 8
-; CHECK-NOV-NEXT: addiw a5, a5, -1
+; CHECK-NOV-NEXT: addi a5, a5, -1
; CHECK-NOV-NEXT: fcvt.w.s a2, fa2, rtz
; CHECK-NOV-NEXT: bge a1, a5, .LBB39_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4641,7 +4641,7 @@ define <4 x i16> @utest_f32i16_mm(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.wu.s a1, fa0, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: fcvt.wu.s a2, fa1, rtz
; CHECK-NOV-NEXT: bgeu a1, a3, .LBB40_6
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4692,7 +4692,7 @@ define <4 x i16> @ustest_f32i16_mm(<4 x float> %x) {
; CHECK-NOV: # %bb.0: # %entry
; CHECK-NOV-NEXT: fcvt.w.s a1, fa3, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: fcvt.w.s a2, fa2, rtz
; CHECK-NOV-NEXT: bge a1, a3, .LBB41_6
; CHECK-NOV-NEXT: # %bb.1: # %entry
@@ -4825,7 +4825,7 @@ define <8 x i16> @stest_f16i16_mm(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.l.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a7, 8
-; CHECK-NOV-NEXT: addiw a7, a7, -1
+; CHECK-NOV-NEXT: addi a7, a7, -1
; CHECK-NOV-NEXT: bge a0, a7, .LBB42_18
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.l.s a1, fs5, rtz
@@ -5207,7 +5207,7 @@ define <8 x i16> @utesth_f16i16_mm(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.lu.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: bgeu a0, a3, .LBB43_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.lu.s a1, fs5, rtz
@@ -5546,7 +5546,7 @@ define <8 x i16> @ustest_f16i16_mm(<8 x half> %x) {
; CHECK-NOV-NEXT: call __extendhfsf2
; CHECK-NOV-NEXT: fcvt.l.s a0, fa0, rtz
; CHECK-NOV-NEXT: lui a3, 16
-; CHECK-NOV-NEXT: addiw a3, a3, -1
+; CHECK-NOV-NEXT: addi a3, a3, -1
; CHECK-NOV-NEXT: bge a0, a3, .LBB44_10
; CHECK-NOV-NEXT: # %bb.1: # %entry
; CHECK-NOV-NEXT: fcvt.l.s a1, fs5, rtz
diff --git a/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll b/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
index cafc36bd1ef6f..a68f171a7bde7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
@@ -461,7 +461,7 @@ define <vscale x 1 x float> @test5(<vscale x 1 x float> %0, <vscale x 1 x float>
; CHECK-NEXT: vfadd.vv v8, v8, v9
; CHECK-NEXT: lui a0, 66
; CHECK-NEXT: fsrm a2
-; CHECK-NEXT: addiw a0, a0, 769
+; CHECK-NEXT: addi a0, a0, 769
; CHECK-NEXT: frrm a2
; CHECK-NEXT: slli a2, a2, 2
; CHECK-NEXT: srl a0, a0, a2
@@ -477,7 +477,7 @@ define <vscale x 1 x float> @test5(<vscale x 1 x float> %0, <vscale x 1 x float>
; UNOPT-NEXT: vfadd.vv v8, v8, v9
; UNOPT-NEXT: lui a0, 66
; UNOPT-NEXT: fsrm a2
-; UNOPT-NEXT: addiw a0, a0, 769
+; UNOPT-NEXT: addi a0, a0, 769
; UNOPT-NEXT: frrm a2
; UNOPT-NEXT: slli a2, a2, 2
; UNOPT-NEXT: srl a0, a0, a2
@@ -590,7 +590,7 @@ define <vscale x 1 x float> @after_fsrm4(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK-NEXT: slli a0, a0, 32
; CHECK-NEXT: lui a2, 66
; CHECK-NEXT: srli a0, a0, 30
-; CHECK-NEXT: addiw a2, a2, 769
+; CHECK-NEXT: addi a2, a2, 769
; CHECK-NEXT: srl a0, a2, a0
; CHECK-NEXT: andi a0, a0, 7
; CHECK-NEXT: fsrm a0
@@ -603,7 +603,7 @@ define <vscale x 1 x float> @after_fsrm4(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT-NEXT: slli a0, a0, 32
; UNOPT-NEXT: lui a2, 66
; UNOPT-NEXT: srli a0, a0, 30
-; UNOPT-NEXT: addiw a2, a2, 769
+; UNOPT-NEXT: addi a2, a2, 769
; UNOPT-NEXT: srl a0, a2, a0
; UNOPT-NEXT: andi a0, a0, 7
; UNOPT-NEXT: fsrm a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/memset-inline.ll b/llvm/test/CodeGen/RISCV/rvv/memset-inline.ll
index 6bed05dcda154..896394017b6f1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/memset-inline.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/memset-inline.ll
@@ -138,7 +138,7 @@ define void @memset_8(ptr %a, i8 %value) nounwind {
; RV64-FAST: # %bb.0:
; RV64-FAST-NEXT: zext.b a1, a1
; RV64-FAST-NEXT: lui a2, 4112
-; RV64-FAST-NEXT: addiw a2, a2, 257
+; RV64-FAST-NEXT: addi a2, a2, 257
; RV64-FAST-NEXT: slli a3, a2, 32
; RV64-FAST-NEXT: add a2, a2, a3
; RV64-FAST-NEXT: mul a1, a1, a2
@@ -278,7 +278,7 @@ define void @aligned_memset_8(ptr align 8 %a, i8 %value) nounwind {
; RV64-BOTH: # %bb.0:
; RV64-BOTH-NEXT: zext.b a1, a1
; RV64-BOTH-NEXT: lui a2, 4112
-; RV64-BOTH-NEXT: addiw a2, a2, 257
+; RV64-BOTH-NEXT: addi a2, a2, 257
; RV64-BOTH-NEXT: slli a3, a2, 32
; RV64-BOTH-NEXT: add a2, a2, a3
; RV64-BOTH-NEXT: mul a1, a1, a2
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr88799.ll b/llvm/test/CodeGen/RISCV/rvv/pr88799.ll
index 7212a789f9e7e..dbbec85a85c30 100644
--- a/llvm/test/CodeGen/RISCV/rvv/pr88799.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/pr88799.ll
@@ -5,7 +5,7 @@ define i32 @main() vscale_range(2,2) {
; CHECK-LABEL: main:
; CHECK: # %bb.0: # %vector.body
; CHECK-NEXT: lui a0, 1040368
-; CHECK-NEXT: addiw a0, a0, -144
+; CHECK-NEXT: addi a0, a0, -144
; CHECK-NEXT: vl2re16.v v8, (a0)
; CHECK-NEXT: vs2r.v v8, (zero)
; CHECK-NEXT: li a0, 0
diff --git a/llvm/test/CodeGen/RISCV/rvv/stack-probing-dynamic.ll b/llvm/test/CodeGen/RISCV/rvv/stack-probing-dynamic.ll
index 604271702ebad..d666832cf6e0b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/stack-probing-dynamic.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/stack-probing-dynamic.ll
@@ -467,7 +467,7 @@ define void @reserved_call_frame(i64 %n) #0 {
; RV64I-NEXT: add a0, sp, a0
; RV64I-NEXT: call callee_stack_args
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 48
+; RV64I-NEXT: addi a0, a0, 48
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 2032
; RV64I-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/stack-probing-rvv.ll b/llvm/test/CodeGen/RISCV/rvv/stack-probing-rvv.ll
index d7f9ae73eaea5..02f03602e84ad 100644
--- a/llvm/test/CodeGen/RISCV/rvv/stack-probing-rvv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/stack-probing-rvv.ll
@@ -347,7 +347,7 @@ define void @f1_vector_4096_arr(ptr %out) #0 {
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: .cfi_def_cfa sp, 12304
; RV64IV-NEXT: lui a0, 3
-; RV64IV-NEXT: addiw a0, a0, 16
+; RV64IV-NEXT: addi a0, a0, 16
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: .cfi_def_cfa_offset 0
; RV64IV-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/stepvector.ll b/llvm/test/CodeGen/RISCV/rvv/stepvector.ll
index 62339130678d0..d4e2c08d70d3d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/stepvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/stepvector.ll
@@ -501,7 +501,7 @@ define <vscale x 8 x i64> @mul_bigimm_stepvector_nxv8i64() {
; RV64-NEXT: vsetvli a0, zero, e64, m8, ta, ma
; RV64-NEXT: vid.v v8
; RV64-NEXT: lui a0, 1987
-; RV64-NEXT: addiw a0, a0, -731
+; RV64-NEXT: addi a0, a0, -731
; RV64-NEXT: slli a0, a0, 12
; RV64-NEXT: addi a0, a0, -683
; RV64-NEXT: vmul.vx v8, v8, a0
@@ -670,7 +670,7 @@ define <vscale x 16 x i64> @mul_bigimm_stepvector_nxv16i64() {
; RV64-NEXT: lui a1, 1987
; RV64-NEXT: vsetvli a2, zero, e64, m8, ta, ma
; RV64-NEXT: vid.v v8
-; RV64-NEXT: addiw a1, a1, -731
+; RV64-NEXT: addi a1, a1, -731
; RV64-NEXT: slli a1, a1, 12
; RV64-NEXT: addi a1, a1, -683
; RV64-NEXT: mul a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/rvv/trunc-sat-clip-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/trunc-sat-clip-sdnode.ll
index 01a90d8a33b6e..3ed437eeed2ff 100644
--- a/llvm/test/CodeGen/RISCV/rvv/trunc-sat-clip-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/trunc-sat-clip-sdnode.ll
@@ -267,7 +267,7 @@ define void @trunc_sat_i32i64_notopt(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: vl4re64.v v8, (a0)
; CHECK-NEXT: lui a0, 524288
-; CHECK-NEXT: addiw a0, a0, 1
+; CHECK-NEXT: addi a0, a0, 1
; CHECK-NEXT: vsetvli a2, zero, e64, m4, ta, ma
; CHECK-NEXT: vmax.vx v8, v8, a0
; CHECK-NEXT: li a0, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll
index f2b566e83616a..e5ba9a0a9bc5d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll
@@ -7,7 +7,7 @@ define void @foo(<vscale x 8 x half> %0) {
; CHECK-NEXT: vsetvli a0, zero, e32, m1, ta, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: lui a0, 1
-; CHECK-NEXT: addiw a0, a0, -1096
+; CHECK-NEXT: addi a0, a0, -1096
; CHECK-NEXT: vmv.v.i v11, 0
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: #APP
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsplats-i64.ll b/llvm/test/CodeGen/RISCV/rvv/vsplats-i64.ll
index 3a747449d0a66..1e1bf5e5d5088 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsplats-i64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsplats-i64.ll
@@ -115,21 +115,13 @@ define <vscale x 8 x i64> @vadd_vx_nxv8i64_8(<vscale x 8 x i64> %v) {
}
define <vscale x 8 x i64> @vadd_vx_nxv8i64_9(<vscale x 8 x i64> %v) {
-; RV32V-LABEL: vadd_vx_nxv8i64_9:
-; RV32V: # %bb.0:
-; RV32V-NEXT: lui a0, 503808
-; RV32V-NEXT: addi a0, a0, -1281
-; RV32V-NEXT: vsetvli a1, zero, e64, m8, ta, ma
-; RV32V-NEXT: vadd.vx v8, v8, a0
-; RV32V-NEXT: ret
-;
-; RV64V-LABEL: vadd_vx_nxv8i64_9:
-; RV64V: # %bb.0:
-; RV64V-NEXT: lui a0, 503808
-; RV64V-NEXT: addiw a0, a0, -1281
-; RV64V-NEXT: vsetvli a1, zero, e64, m8, ta, ma
-; RV64V-NEXT: vadd.vx v8, v8, a0
-; RV64V-NEXT: ret
+; CHECK-LABEL: vadd_vx_nxv8i64_9:
+; CHECK: # %bb.0:
+; CHECK-NEXT: lui a0, 503808
+; CHECK-NEXT: addi a0, a0, -1281
+; CHECK-NEXT: vsetvli a1, zero, e64, m8, ta, ma
+; CHECK-NEXT: vadd.vx v8, v8, a0
+; CHECK-NEXT: ret
%vret = add <vscale x 8 x i64> %v, splat (i64 2063596287)
ret <vscale x 8 x i64> %vret
}
diff --git a/llvm/test/CodeGen/RISCV/rvv/zvqdotq-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/zvqdotq-sdnode.ll
index 2bd2ef2878fd5..2c2e97a12a658 100644
--- a/llvm/test/CodeGen/RISCV/rvv/zvqdotq-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/zvqdotq-sdnode.ll
@@ -230,29 +230,17 @@ define i32 @reduce_of_sext(<vscale x 16 x i8> %a) {
; NODOT-NEXT: vmv.x.s a0, v8
; NODOT-NEXT: ret
;
-; DOT32-LABEL: reduce_of_sext:
-; DOT32: # %bb.0: # %entry
-; DOT32-NEXT: vsetvli a0, zero, e32, m2, ta, ma
-; DOT32-NEXT: vmv.v.i v10, 0
-; DOT32-NEXT: lui a0, 4112
-; DOT32-NEXT: addi a0, a0, 257
-; DOT32-NEXT: vqdot.vx v10, v8, a0
-; DOT32-NEXT: vmv.s.x v8, zero
-; DOT32-NEXT: vredsum.vs v8, v10, v8
-; DOT32-NEXT: vmv.x.s a0, v8
-; DOT32-NEXT: ret
-;
-; DOT64-LABEL: reduce_of_sext:
-; DOT64: # %bb.0: # %entry
-; DOT64-NEXT: vsetvli a0, zero, e32, m2, ta, ma
-; DOT64-NEXT: vmv.v.i v10, 0
-; DOT64-NEXT: lui a0, 4112
-; DOT64-NEXT: addiw a0, a0, 257
-; DOT64-NEXT: vqdot.vx v10, v8, a0
-; DOT64-NEXT: vmv.s.x v8, zero
-; DOT64-NEXT: vredsum.vs v8, v10, v8
-; DOT64-NEXT: vmv.x.s a0, v8
-; DOT64-NEXT: ret
+; DOT-LABEL: reduce_of_sext:
+; DOT: # %bb.0: # %entry
+; DOT-NEXT: vsetvli a0, zero, e32, m2, ta, ma
+; DOT-NEXT: vmv.v.i v10, 0
+; DOT-NEXT: lui a0, 4112
+; DOT-NEXT: addi a0, a0, 257
+; DOT-NEXT: vqdot.vx v10, v8, a0
+; DOT-NEXT: vmv.s.x v8, zero
+; DOT-NEXT: vredsum.vs v8, v10, v8
+; DOT-NEXT: vmv.x.s a0, v8
+; DOT-NEXT: ret
entry:
%a.ext = sext <vscale x 16 x i8> %a to <vscale x 16 x i32>
%res = tail call i32 @llvm.vector.reduce.add.v16i32(<vscale x 16 x i32> %a.ext)
@@ -269,29 +257,17 @@ define i32 @reduce_of_zext(<vscale x 16 x i8> %a) {
; NODOT-NEXT: vmv.x.s a0, v8
; NODOT-NEXT: ret
;
-; DOT32-LABEL: reduce_of_zext:
-; DOT32: # %bb.0: # %entry
-; DOT32-NEXT: vsetvli a0, zero, e32, m2, ta, ma
-; DOT32-NEXT: vmv.v.i v10, 0
-; DOT32-NEXT: lui a0, 4112
-; DOT32-NEXT: addi a0, a0, 257
-; DOT32-NEXT: vqdotu.vx v10, v8, a0
-; DOT32-NEXT: vmv.s.x v8, zero
-; DOT32-NEXT: vredsum.vs v8, v10, v8
-; DOT32-NEXT: vmv.x.s a0, v8
-; DOT32-NEXT: ret
-;
-; DOT64-LABEL: reduce_of_zext:
-; DOT64: # %bb.0: # %entry
-; DOT64-NEXT: vsetvli a0, zero, e32, m2, ta, ma
-; DOT64-NEXT: vmv.v.i v10, 0
-; DOT64-NEXT: lui a0, 4112
-; DOT64-NEXT: addiw a0, a0, 257
-; DOT64-NEXT: vqdotu.vx v10, v8, a0
-; DOT64-NEXT: vmv.s.x v8, zero
-; DOT64-NEXT: vredsum.vs v8, v10, v8
-; DOT64-NEXT: vmv.x.s a0, v8
-; DOT64-NEXT: ret
+; DOT-LABEL: reduce_of_zext:
+; DOT: # %bb.0: # %entry
+; DOT-NEXT: vsetvli a0, zero, e32, m2, ta, ma
+; DOT-NEXT: vmv.v.i v10, 0
+; DOT-NEXT: lui a0, 4112
+; DOT-NEXT: addi a0, a0, 257
+; DOT-NEXT: vqdotu.vx v10, v8, a0
+; DOT-NEXT: vmv.s.x v8, zero
+; DOT-NEXT: vredsum.vs v8, v10, v8
+; DOT-NEXT: vmv.x.s a0, v8
+; DOT-NEXT: ret
entry:
%a.ext = zext <vscale x 16 x i8> %a to <vscale x 16 x i32>
%res = tail call i32 @llvm.vector.reduce.add.v16i32(<vscale x 16 x i32> %a.ext)
@@ -957,3 +933,6 @@ entry:
%res = call <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add(<vscale x 1 x i32> zeroinitializer, <vscale x 4 x i32> %mul)
ret <vscale x 1 x i32> %res
}
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; DOT32: {{.*}}
+; DOT64: {{.*}}
diff --git a/llvm/test/CodeGen/RISCV/sadd_sat.ll b/llvm/test/CodeGen/RISCV/sadd_sat.ll
index ab03ccc4ba590..04f2436201e9d 100644
--- a/llvm/test/CodeGen/RISCV/sadd_sat.ll
+++ b/llvm/test/CodeGen/RISCV/sadd_sat.ll
@@ -133,7 +133,7 @@ define signext i16 @func16(i16 signext %x, i16 signext %y) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: lui a1, 8
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: bge a0, a1, .LBB2_3
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: lui a1, 1048568
@@ -162,7 +162,7 @@ define signext i16 @func16(i16 signext %x, i16 signext %y) nounwind {
; RV64IZbb: # %bb.0:
; RV64IZbb-NEXT: add a0, a0, a1
; RV64IZbb-NEXT: lui a1, 8
-; RV64IZbb-NEXT: addiw a1, a1, -1
+; RV64IZbb-NEXT: addi a1, a1, -1
; RV64IZbb-NEXT: min a0, a0, a1
; RV64IZbb-NEXT: lui a1, 1048568
; RV64IZbb-NEXT: max a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/sadd_sat_plus.ll b/llvm/test/CodeGen/RISCV/sadd_sat_plus.ll
index abcf3379d0a6e..857026cce0d43 100644
--- a/llvm/test/CodeGen/RISCV/sadd_sat_plus.ll
+++ b/llvm/test/CodeGen/RISCV/sadd_sat_plus.ll
@@ -148,7 +148,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64I-NEXT: slli a1, a1, 48
; RV64I-NEXT: srai a1, a1, 48
; RV64I-NEXT: add a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, -1
+; RV64I-NEXT: addi a1, a2, -1
; RV64I-NEXT: bge a0, a1, .LBB2_3
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: lui a1, 1048568
@@ -182,7 +182,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64IZbb-NEXT: mul a1, a1, a2
; RV64IZbb-NEXT: lui a2, 8
; RV64IZbb-NEXT: sext.h a1, a1
-; RV64IZbb-NEXT: addiw a2, a2, -1
+; RV64IZbb-NEXT: addi a2, a2, -1
; RV64IZbb-NEXT: add a0, a0, a1
; RV64IZbb-NEXT: min a0, a0, a2
; RV64IZbb-NEXT: lui a1, 1048568
diff --git a/llvm/test/CodeGen/RISCV/select-cc.ll b/llvm/test/CodeGen/RISCV/select-cc.ll
index e69dc303d85dc..ec1f8aeddcaaf 100644
--- a/llvm/test/CodeGen/RISCV/select-cc.ll
+++ b/llvm/test/CodeGen/RISCV/select-cc.ll
@@ -358,7 +358,7 @@ define i32 @select_sge_int16min(i32 signext %x, i32 signext %y, i32 signext %z)
; RV64I-LABEL: select_sge_int16min:
; RV64I: # %bb.0:
; RV64I-NEXT: lui a3, 1048560
-; RV64I-NEXT: addiw a3, a3, -1
+; RV64I-NEXT: addi a3, a3, -1
; RV64I-NEXT: blt a3, a0, .LBB2_2
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: mv a1, a2
@@ -369,7 +369,7 @@ define i32 @select_sge_int16min(i32 signext %x, i32 signext %y, i32 signext %z)
; RV64I-CCMOV-LABEL: select_sge_int16min:
; RV64I-CCMOV: # %bb.0:
; RV64I-CCMOV-NEXT: lui a3, 1048560
-; RV64I-CCMOV-NEXT: addiw a3, a3, -1
+; RV64I-CCMOV-NEXT: addi a3, a3, -1
; RV64I-CCMOV-NEXT: slt a0, a3, a0
; RV64I-CCMOV-NEXT: mips.ccmov a0, a0, a1, a2
; RV64I-CCMOV-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/select-const.ll b/llvm/test/CodeGen/RISCV/select-const.ll
index 4538572e90cac..bc56408c0ca0f 100644
--- a/llvm/test/CodeGen/RISCV/select-const.ll
+++ b/llvm/test/CodeGen/RISCV/select-const.ll
@@ -413,7 +413,7 @@ define i32 @select_eq_10000_10001(i32 signext %a, i32 signext %b) {
; RV64-NEXT: xor a0, a0, a1
; RV64-NEXT: lui a1, 2
; RV64-NEXT: seqz a0, a0
-; RV64-NEXT: addiw a1, a1, 1810
+; RV64-NEXT: addi a1, a1, 1810
; RV64-NEXT: sub a0, a1, a0
; RV64-NEXT: ret
%1 = icmp eq i32 %a, %b
@@ -436,7 +436,7 @@ define i32 @select_ne_10001_10002(i32 signext %a, i32 signext %b) {
; RV64-NEXT: xor a0, a0, a1
; RV64-NEXT: lui a1, 2
; RV64-NEXT: snez a0, a0
-; RV64-NEXT: addiw a1, a1, 1810
+; RV64-NEXT: addi a1, a1, 1810
; RV64-NEXT: sub a0, a1, a0
; RV64-NEXT: ret
%1 = icmp ne i32 %a, %b
diff --git a/llvm/test/CodeGen/RISCV/select.ll b/llvm/test/CodeGen/RISCV/select.ll
index 303c4ac23b313..0ea80bf592999 100644
--- a/llvm/test/CodeGen/RISCV/select.ll
+++ b/llvm/test/CodeGen/RISCV/select.ll
@@ -1111,7 +1111,7 @@ define i32 @select_udiv_3(i1 zeroext %cond, i32 %a) {
; RV64IM-NEXT: # %bb.1: # %entry
; RV64IM-NEXT: srliw a0, a1, 1
; RV64IM-NEXT: lui a1, 199729
-; RV64IM-NEXT: addiw a1, a1, -975
+; RV64IM-NEXT: addi a1, a1, -975
; RV64IM-NEXT: mul a1, a0, a1
; RV64IM-NEXT: srli a1, a1, 34
; RV64IM-NEXT: .LBB27_2: # %entry
@@ -1122,7 +1122,7 @@ define i32 @select_udiv_3(i1 zeroext %cond, i32 %a) {
; RV64IMXVTCONDOPS: # %bb.0: # %entry
; RV64IMXVTCONDOPS-NEXT: srliw a2, a1, 1
; RV64IMXVTCONDOPS-NEXT: lui a3, 199729
-; RV64IMXVTCONDOPS-NEXT: addiw a3, a3, -975
+; RV64IMXVTCONDOPS-NEXT: addi a3, a3, -975
; RV64IMXVTCONDOPS-NEXT: mul a2, a2, a3
; RV64IMXVTCONDOPS-NEXT: srli a2, a2, 34
; RV64IMXVTCONDOPS-NEXT: vt.maskc a1, a1, a0
@@ -1146,7 +1146,7 @@ define i32 @select_udiv_3(i1 zeroext %cond, i32 %a) {
; RV64IMZICOND: # %bb.0: # %entry
; RV64IMZICOND-NEXT: srliw a2, a1, 1
; RV64IMZICOND-NEXT: lui a3, 199729
-; RV64IMZICOND-NEXT: addiw a3, a3, -975
+; RV64IMZICOND-NEXT: addi a3, a3, -975
; RV64IMZICOND-NEXT: mul a2, a2, a3
; RV64IMZICOND-NEXT: srli a2, a2, 34
; RV64IMZICOND-NEXT: czero.eqz a1, a1, a0
@@ -1533,50 +1533,14 @@ define i32 @select_cst_not4(i32 signext %a, i32 signext %b) {
}
define i32 @select_cst_not5(i32 signext %a, i32 signext %b) {
-; RV32IM-LABEL: select_cst_not5:
-; RV32IM: # %bb.0:
-; RV32IM-NEXT: slt a0, a0, a1
-; RV32IM-NEXT: lui a1, 16
-; RV32IM-NEXT: neg a0, a0
-; RV32IM-NEXT: addi a1, a1, -5
-; RV32IM-NEXT: xor a0, a0, a1
-; RV32IM-NEXT: ret
-;
-; RV64IM-LABEL: select_cst_not5:
-; RV64IM: # %bb.0:
-; RV64IM-NEXT: slt a0, a0, a1
-; RV64IM-NEXT: lui a1, 16
-; RV64IM-NEXT: neg a0, a0
-; RV64IM-NEXT: addiw a1, a1, -5
-; RV64IM-NEXT: xor a0, a0, a1
-; RV64IM-NEXT: ret
-;
-; RV64IMXVTCONDOPS-LABEL: select_cst_not5:
-; RV64IMXVTCONDOPS: # %bb.0:
-; RV64IMXVTCONDOPS-NEXT: slt a0, a0, a1
-; RV64IMXVTCONDOPS-NEXT: lui a1, 16
-; RV64IMXVTCONDOPS-NEXT: neg a0, a0
-; RV64IMXVTCONDOPS-NEXT: addiw a1, a1, -5
-; RV64IMXVTCONDOPS-NEXT: xor a0, a0, a1
-; RV64IMXVTCONDOPS-NEXT: ret
-;
-; RV32IMZICOND-LABEL: select_cst_not5:
-; RV32IMZICOND: # %bb.0:
-; RV32IMZICOND-NEXT: slt a0, a0, a1
-; RV32IMZICOND-NEXT: lui a1, 16
-; RV32IMZICOND-NEXT: neg a0, a0
-; RV32IMZICOND-NEXT: addi a1, a1, -5
-; RV32IMZICOND-NEXT: xor a0, a0, a1
-; RV32IMZICOND-NEXT: ret
-;
-; RV64IMZICOND-LABEL: select_cst_not5:
-; RV64IMZICOND: # %bb.0:
-; RV64IMZICOND-NEXT: slt a0, a0, a1
-; RV64IMZICOND-NEXT: lui a1, 16
-; RV64IMZICOND-NEXT: neg a0, a0
-; RV64IMZICOND-NEXT: addiw a1, a1, -5
-; RV64IMZICOND-NEXT: xor a0, a0, a1
-; RV64IMZICOND-NEXT: ret
+; CHECK-LABEL: select_cst_not5:
+; CHECK: # %bb.0:
+; CHECK-NEXT: slt a0, a0, a1
+; CHECK-NEXT: lui a1, 16
+; CHECK-NEXT: neg a0, a0
+; CHECK-NEXT: addi a1, a1, -5
+; CHECK-NEXT: xor a0, a0, a1
+; CHECK-NEXT: ret
%cond = icmp slt i32 %a, %b
%ret = select i1 %cond, i32 -65532, i32 65531
ret i32 %ret
@@ -1678,7 +1642,7 @@ define i32 @select_cst2(i1 zeroext %cond) {
; RV64IM-NEXT: bnez a0, .LBB44_2
; RV64IM-NEXT: # %bb.1:
; RV64IM-NEXT: lui a0, 5
-; RV64IM-NEXT: addiw a0, a0, -480
+; RV64IM-NEXT: addi a0, a0, -480
; RV64IM-NEXT: ret
; RV64IM-NEXT: .LBB44_2:
; RV64IM-NEXT: li a0, 10
@@ -1687,26 +1651,18 @@ define i32 @select_cst2(i1 zeroext %cond) {
; RV64IMXVTCONDOPS-LABEL: select_cst2:
; RV64IMXVTCONDOPS: # %bb.0:
; RV64IMXVTCONDOPS-NEXT: lui a1, 5
-; RV64IMXVTCONDOPS-NEXT: addiw a1, a1, -490
+; RV64IMXVTCONDOPS-NEXT: addi a1, a1, -490
; RV64IMXVTCONDOPS-NEXT: vt.maskcn a0, a1, a0
; RV64IMXVTCONDOPS-NEXT: addi a0, a0, 10
; RV64IMXVTCONDOPS-NEXT: ret
;
-; RV32IMZICOND-LABEL: select_cst2:
-; RV32IMZICOND: # %bb.0:
-; RV32IMZICOND-NEXT: lui a1, 5
-; RV32IMZICOND-NEXT: addi a1, a1, -490
-; RV32IMZICOND-NEXT: czero.nez a0, a1, a0
-; RV32IMZICOND-NEXT: addi a0, a0, 10
-; RV32IMZICOND-NEXT: ret
-;
-; RV64IMZICOND-LABEL: select_cst2:
-; RV64IMZICOND: # %bb.0:
-; RV64IMZICOND-NEXT: lui a1, 5
-; RV64IMZICOND-NEXT: addiw a1, a1, -490
-; RV64IMZICOND-NEXT: czero.nez a0, a1, a0
-; RV64IMZICOND-NEXT: addi a0, a0, 10
-; RV64IMZICOND-NEXT: ret
+; CHECKZICOND-LABEL: select_cst2:
+; CHECKZICOND: # %bb.0:
+; CHECKZICOND-NEXT: lui a1, 5
+; CHECKZICOND-NEXT: addi a1, a1, -490
+; CHECKZICOND-NEXT: czero.nez a0, a1, a0
+; CHECKZICOND-NEXT: addi a0, a0, 10
+; CHECKZICOND-NEXT: ret
%ret = select i1 %cond, i32 10, i32 20000
ret i32 %ret
}
@@ -1729,42 +1685,32 @@ define i32 @select_cst3(i1 zeroext %cond) {
; RV64IM-NEXT: bnez a0, .LBB45_2
; RV64IM-NEXT: # %bb.1:
; RV64IM-NEXT: lui a0, 5
-; RV64IM-NEXT: addiw a0, a0, -480
+; RV64IM-NEXT: addi a0, a0, -480
; RV64IM-NEXT: ret
; RV64IM-NEXT: .LBB45_2:
; RV64IM-NEXT: lui a0, 7
-; RV64IM-NEXT: addiw a0, a0, 1328
+; RV64IM-NEXT: addi a0, a0, 1328
; RV64IM-NEXT: ret
;
; RV64IMXVTCONDOPS-LABEL: select_cst3:
; RV64IMXVTCONDOPS: # %bb.0:
; RV64IMXVTCONDOPS-NEXT: lui a1, 1048574
-; RV64IMXVTCONDOPS-NEXT: addiw a1, a1, -1808
+; RV64IMXVTCONDOPS-NEXT: addi a1, a1, -1808
; RV64IMXVTCONDOPS-NEXT: vt.maskcn a0, a1, a0
; RV64IMXVTCONDOPS-NEXT: lui a1, 7
-; RV64IMXVTCONDOPS-NEXT: addiw a1, a1, 1328
+; RV64IMXVTCONDOPS-NEXT: addi a1, a1, 1328
; RV64IMXVTCONDOPS-NEXT: add a0, a0, a1
; RV64IMXVTCONDOPS-NEXT: ret
;
-; RV32IMZICOND-LABEL: select_cst3:
-; RV32IMZICOND: # %bb.0:
-; RV32IMZICOND-NEXT: lui a1, 1048574
-; RV32IMZICOND-NEXT: addi a1, a1, -1808
-; RV32IMZICOND-NEXT: czero.nez a0, a1, a0
-; RV32IMZICOND-NEXT: lui a1, 7
-; RV32IMZICOND-NEXT: addi a1, a1, 1328
-; RV32IMZICOND-NEXT: add a0, a0, a1
-; RV32IMZICOND-NEXT: ret
-;
-; RV64IMZICOND-LABEL: select_cst3:
-; RV64IMZICOND: # %bb.0:
-; RV64IMZICOND-NEXT: lui a1, 1048574
-; RV64IMZICOND-NEXT: addiw a1, a1, -1808
-; RV64IMZICOND-NEXT: czero.nez a0, a1, a0
-; RV64IMZICOND-NEXT: lui a1, 7
-; RV64IMZICOND-NEXT: addiw a1, a1, 1328
-; RV64IMZICOND-NEXT: add a0, a0, a1
-; RV64IMZICOND-NEXT: ret
+; CHECKZICOND-LABEL: select_cst3:
+; CHECKZICOND: # %bb.0:
+; CHECKZICOND-NEXT: lui a1, 1048574
+; CHECKZICOND-NEXT: addi a1, a1, -1808
+; CHECKZICOND-NEXT: czero.nez a0, a1, a0
+; CHECKZICOND-NEXT: lui a1, 7
+; CHECKZICOND-NEXT: addi a1, a1, 1328
+; CHECKZICOND-NEXT: add a0, a0, a1
+; CHECKZICOND-NEXT: ret
%ret = select i1 %cond, i32 30000, i32 20000
ret i32 %ret
}
@@ -1796,7 +1742,7 @@ define i32 @select_cst5(i1 zeroext %cond) {
; RV64IM-NEXT: bnez a0, .LBB47_2
; RV64IM-NEXT: # %bb.1:
; RV64IM-NEXT: lui a0, 1
-; RV64IM-NEXT: addiw a0, a0, -2047
+; RV64IM-NEXT: addi a0, a0, -2047
; RV64IM-NEXT: ret
; RV64IM-NEXT: .LBB47_2:
; RV64IM-NEXT: li a0, 2047
@@ -1839,7 +1785,7 @@ define i32 @select_cst5_invert(i1 zeroext %cond) {
; RV64IM-NEXT: ret
; RV64IM-NEXT: .LBB48_2:
; RV64IM-NEXT: lui a0, 1
-; RV64IM-NEXT: addiw a0, a0, -2047
+; RV64IM-NEXT: addi a0, a0, -2047
; RV64IM-NEXT: ret
;
; RV64IMXVTCONDOPS-LABEL: select_cst5_invert:
diff --git a/llvm/test/CodeGen/RISCV/sextw-removal.ll b/llvm/test/CodeGen/RISCV/sextw-removal.ll
index 49494608eee4d..1a978d1a0fcac 100644
--- a/llvm/test/CodeGen/RISCV/sextw-removal.ll
+++ b/llvm/test/CodeGen/RISCV/sextw-removal.ll
@@ -180,8 +180,8 @@ define void @test5(i32 signext %arg, i32 signext %arg1) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw s0, a1, 1365
-; RV64I-NEXT: addiw s1, a2, 819
+; RV64I-NEXT: addi s0, a1, 1365
+; RV64I-NEXT: addi s1, a2, 819
; RV64I-NEXT: addi s2, a3, -241
; RV64I-NEXT: addi s3, a4, 257
; RV64I-NEXT: .LBB4_1: # %bb2
@@ -327,10 +327,10 @@ define void @test7(i32 signext %arg, i32 signext %arg1) nounwind {
; RV64I-NEXT: lui a2, 209715
; RV64I-NEXT: lui a3, 61681
; RV64I-NEXT: lui a4, 4112
-; RV64I-NEXT: addiw s0, a1, 1365
-; RV64I-NEXT: addiw s1, a2, 819
-; RV64I-NEXT: addiw s2, a3, -241
-; RV64I-NEXT: addiw s3, a4, 257
+; RV64I-NEXT: addi s0, a1, 1365
+; RV64I-NEXT: addi s1, a2, 819
+; RV64I-NEXT: addi s2, a3, -241
+; RV64I-NEXT: addi s3, a4, 257
; RV64I-NEXT: slli a1, s0, 32
; RV64I-NEXT: add s0, s0, a1
; RV64I-NEXT: slli a1, s1, 32
diff --git a/llvm/test/CodeGen/RISCV/shl-cttz.ll b/llvm/test/CodeGen/RISCV/shl-cttz.ll
index 500673cc29196..99dc4f816d669 100644
--- a/llvm/test/CodeGen/RISCV/shl-cttz.ll
+++ b/llvm/test/CodeGen/RISCV/shl-cttz.ll
@@ -158,11 +158,11 @@ define i16 @shl_cttz_i16(i16 %x, i16 %y) {
; RV64I-NEXT: not a1, a1
; RV64I-NEXT: lui a3, 5
; RV64I-NEXT: and a1, a1, a2
-; RV64I-NEXT: addiw a2, a3, 1365
+; RV64I-NEXT: addi a2, a3, 1365
; RV64I-NEXT: srli a3, a1, 1
; RV64I-NEXT: and a2, a3, a2
; RV64I-NEXT: lui a3, 3
-; RV64I-NEXT: addiw a3, a3, 819
+; RV64I-NEXT: addi a3, a3, 819
; RV64I-NEXT: sub a1, a1, a2
; RV64I-NEXT: and a2, a1, a3
; RV64I-NEXT: srli a1, a1, 2
@@ -228,11 +228,11 @@ define i16 @shl_cttz_constant_i16(i16 %y) {
; RV64I-NEXT: not a0, a0
; RV64I-NEXT: lui a2, 5
; RV64I-NEXT: and a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, 1365
+; RV64I-NEXT: addi a1, a2, 1365
; RV64I-NEXT: srli a2, a0, 1
; RV64I-NEXT: and a1, a2, a1
; RV64I-NEXT: lui a2, 3
-; RV64I-NEXT: addiw a2, a2, 819
+; RV64I-NEXT: addi a2, a2, 819
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: and a1, a0, a2
; RV64I-NEXT: srli a0, a0, 2
diff --git a/llvm/test/CodeGen/RISCV/shlimm-addimm.ll b/llvm/test/CodeGen/RISCV/shlimm-addimm.ll
index c842ba5da5208..da7f62cd3ff0c 100644
--- a/llvm/test/CodeGen/RISCV/shlimm-addimm.ll
+++ b/llvm/test/CodeGen/RISCV/shlimm-addimm.ll
@@ -125,7 +125,7 @@ define i64 @shl5_add101024_c(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: slli a0, a0, 5
; RV64I-NEXT: lui a1, 25
-; RV64I-NEXT: addiw a1, a1, -1376
+; RV64I-NEXT: addi a1, a1, -1376
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
%tmp0 = shl i64 %x, 5
@@ -193,7 +193,7 @@ define i64 @shl5_add47968_c(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: slli a0, a0, 5
; RV64I-NEXT: lui a1, 12
-; RV64I-NEXT: addiw a1, a1, -1184
+; RV64I-NEXT: addi a1, a1, -1184
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
%tmp0 = shl i64 %x, 5
@@ -261,7 +261,7 @@ define i64 @shl5_add47969_c(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: slli a0, a0, 5
; RV64I-NEXT: lui a1, 12
-; RV64I-NEXT: addiw a1, a1, -1183
+; RV64I-NEXT: addi a1, a1, -1183
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
%tmp0 = shl i64 %x, 5
@@ -330,7 +330,7 @@ define i64 @shl5_sub47968_c(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: slli a0, a0, 5
; RV64I-NEXT: lui a1, 1048564
-; RV64I-NEXT: addiw a1, a1, 1184
+; RV64I-NEXT: addi a1, a1, 1184
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
%tmp0 = shl i64 %x, 5
@@ -399,7 +399,7 @@ define i64 @shl5_sub47969_c(i64 %x) {
; RV64I: # %bb.0:
; RV64I-NEXT: slli a0, a0, 5
; RV64I-NEXT: lui a1, 1048564
-; RV64I-NEXT: addiw a1, a1, 1183
+; RV64I-NEXT: addi a1, a1, 1183
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: ret
%tmp0 = shl i64 %x, 5
diff --git a/llvm/test/CodeGen/RISCV/signed-truncation-check.ll b/llvm/test/CodeGen/RISCV/signed-truncation-check.ll
index d43dfd46d62fc..e70c8e3544699 100644
--- a/llvm/test/CodeGen/RISCV/signed-truncation-check.ll
+++ b/llvm/test/CodeGen/RISCV/signed-truncation-check.ll
@@ -349,7 +349,7 @@ define i1 @add_ugecmp_i32_i16(i32 %x) nounwind {
; RV64I-NEXT: lui a1, 1048568
; RV64I-NEXT: addw a0, a0, a1
; RV64I-NEXT: lui a1, 1048560
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: sltu a0, a1, a0
; RV64I-NEXT: ret
;
@@ -429,7 +429,7 @@ define i1 @add_ugecmp_i64_i16(i64 %x) nounwind {
; RV64I-NEXT: lui a1, 1048568
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: lui a1, 1048560
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: sltu a0, a1, a0
; RV64I-NEXT: ret
;
@@ -805,7 +805,7 @@ define i1 @add_ultcmp_bad_i16_i8_cmp(i16 %x, i16 %y) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a2, 16
; RV64I-NEXT: addi a0, a0, 128
-; RV64I-NEXT: addiw a2, a2, -1
+; RV64I-NEXT: addi a2, a2, -1
; RV64I-NEXT: and a1, a1, a2
; RV64I-NEXT: and a0, a0, a2
; RV64I-NEXT: sltu a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/split-offsets.ll b/llvm/test/CodeGen/RISCV/split-offsets.ll
index cecd34956df8c..bf08270e0bb97 100644
--- a/llvm/test/CodeGen/RISCV/split-offsets.ll
+++ b/llvm/test/CodeGen/RISCV/split-offsets.ll
@@ -25,17 +25,16 @@ define void @test1(ptr %sp, ptr %t, i32 %n) {
;
; RV64I-LABEL: test1:
; RV64I: # %bb.0: # %entry
-; RV64I-NEXT: lui a2, 20
; RV64I-NEXT: ld a0, 0(a0)
+; RV64I-NEXT: lui a2, 20
; RV64I-NEXT: li a3, 2
-; RV64I-NEXT: addiw a2, a2, -1920
; RV64I-NEXT: add a1, a1, a2
; RV64I-NEXT: add a0, a0, a2
; RV64I-NEXT: li a2, 1
-; RV64I-NEXT: sw a3, 0(a0)
-; RV64I-NEXT: sw a2, 4(a0)
-; RV64I-NEXT: sw a2, 0(a1)
-; RV64I-NEXT: sw a3, 4(a1)
+; RV64I-NEXT: sw a3, -1920(a0)
+; RV64I-NEXT: sw a2, -1916(a0)
+; RV64I-NEXT: sw a2, -1920(a1)
+; RV64I-NEXT: sw a3, -1916(a1)
; RV64I-NEXT: ret
entry:
%s = load ptr, ptr %sp
@@ -77,7 +76,6 @@ define void @test2(ptr %sp, ptr %t, i32 %n) {
; RV64I-NEXT: li a3, 0
; RV64I-NEXT: ld a0, 0(a0)
; RV64I-NEXT: lui a4, 20
-; RV64I-NEXT: addiw a4, a4, -1920
; RV64I-NEXT: add a1, a1, a4
; RV64I-NEXT: add a0, a0, a4
; RV64I-NEXT: sext.w a2, a2
@@ -85,10 +83,10 @@ define void @test2(ptr %sp, ptr %t, i32 %n) {
; RV64I-NEXT: .LBB1_1: # %while_body
; RV64I-NEXT: # =>This Inner Loop Header: Depth=1
; RV64I-NEXT: addiw a4, a3, 1
-; RV64I-NEXT: sw a4, 0(a0)
-; RV64I-NEXT: sw a3, 4(a0)
-; RV64I-NEXT: sw a4, 0(a1)
-; RV64I-NEXT: sw a3, 4(a1)
+; RV64I-NEXT: sw a4, -1920(a0)
+; RV64I-NEXT: sw a3, -1916(a0)
+; RV64I-NEXT: sw a4, -1920(a1)
+; RV64I-NEXT: sw a3, -1916(a1)
; RV64I-NEXT: mv a3, a4
; RV64I-NEXT: blt a4, a2, .LBB1_1
; RV64I-NEXT: .LBB1_2: # %while_end
@@ -134,11 +132,10 @@ define void @test3(ptr %t) {
; RV64I: # %bb.0: # %entry
; RV64I-NEXT: lui a1, 20
; RV64I-NEXT: li a2, 2
-; RV64I-NEXT: addiw a1, a1, -1920
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: li a1, 3
-; RV64I-NEXT: sw a2, 4(a0)
-; RV64I-NEXT: sw a1, 8(a0)
+; RV64I-NEXT: sw a2, -1916(a0)
+; RV64I-NEXT: sw a1, -1912(a0)
; RV64I-NEXT: ret
entry:
%splitgep = getelementptr i8, ptr %t, i64 80000
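
Side note on the split-offsets.ll hunks above (an annotation for this review, not part of the patch): the `getelementptr` offset 80000 decomposes as 20 * 4096 - 1920, so once the separate `addiw a2, a2, -1920` disappears, the -1920 adjustment can migrate into each store's 12-bit signed displacement instead. A quick arithmetic check, illustrative only:

```python
hi20 = 20 << 12                  # lui a2, 20 materialises 81920
assert hi20 - 1920 == 80000      # previously: addiw a2, a2, -1920
assert hi20 - 1916 == 80004      # now folded: sw a2, -1916(a0)
assert -2048 <= -1920 <= 2047    # -1920 fits sw's 12-bit signed offset
```
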
diff --git a/llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll b/llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll
index 83ae03452db5b..eb70d7f43c0ef 100644
--- a/llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll
+++ b/llvm/test/CodeGen/RISCV/split-udiv-by-constant.ll
@@ -35,7 +35,7 @@ define iXLen2 @test_udiv_3(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 699051
; RV64-NEXT: lui a4, %hi(.LCPI0_0)
; RV64-NEXT: sltu a5, a2, a0
-; RV64-NEXT: addiw a3, a3, -1365
+; RV64-NEXT: addi a3, a3, -1365
; RV64-NEXT: ld a4, %lo(.LCPI0_0)(a4)
; RV64-NEXT: add a2, a2, a5
; RV64-NEXT: slli a5, a3, 32
@@ -90,7 +90,7 @@ define iXLen2 @test_udiv_5(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 838861
; RV64-NEXT: lui a4, %hi(.LCPI1_0)
; RV64-NEXT: sltu a5, a2, a0
-; RV64-NEXT: addiw a3, a3, -819
+; RV64-NEXT: addi a3, a3, -819
; RV64-NEXT: ld a4, %lo(.LCPI1_0)(a4)
; RV64-NEXT: add a2, a2, a5
; RV64-NEXT: slli a5, a3, 32
@@ -200,9 +200,9 @@ define iXLen2 @test_udiv_15(iXLen2 %x) nounwind {
; RV64-NEXT: lui a4, %hi(.LCPI4_0)
; RV64-NEXT: lui a5, 978671
; RV64-NEXT: sltu a6, a2, a0
-; RV64-NEXT: addiw a3, a3, -1911
+; RV64-NEXT: addi a3, a3, -1911
; RV64-NEXT: ld a4, %lo(.LCPI4_0)(a4)
-; RV64-NEXT: addiw a5, a5, -273
+; RV64-NEXT: addi a5, a5, -273
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
; RV64-NEXT: add a3, a3, a6
@@ -258,7 +258,7 @@ define iXLen2 @test_udiv_17(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 986895
; RV64-NEXT: lui a4, %hi(.LCPI5_0)
; RV64-NEXT: sltu a5, a2, a0
-; RV64-NEXT: addiw a3, a3, 241
+; RV64-NEXT: addi a3, a3, 241
; RV64-NEXT: ld a4, %lo(.LCPI5_0)(a4)
; RV64-NEXT: add a2, a2, a5
; RV64-NEXT: slli a5, a3, 32
@@ -316,9 +316,9 @@ define iXLen2 @test_udiv_255(iXLen2 %x) nounwind {
; RV64-NEXT: lui a4, %hi(.LCPI6_0)
; RV64-NEXT: lui a5, 1044464
; RV64-NEXT: sltu a6, a2, a0
-; RV64-NEXT: addiw a3, a3, 129
+; RV64-NEXT: addi a3, a3, 129
; RV64-NEXT: ld a4, %lo(.LCPI6_0)(a4)
-; RV64-NEXT: addiw a5, a5, -257
+; RV64-NEXT: addi a5, a5, -257
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
; RV64-NEXT: add a3, a3, a6
@@ -374,7 +374,7 @@ define iXLen2 @test_udiv_257(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 1044496
; RV64-NEXT: lui a4, %hi(.LCPI7_0)
; RV64-NEXT: sltu a5, a2, a0
-; RV64-NEXT: addiw a3, a3, -255
+; RV64-NEXT: addi a3, a3, -255
; RV64-NEXT: ld a4, %lo(.LCPI7_0)(a4)
; RV64-NEXT: add a2, a2, a5
; RV64-NEXT: slli a5, a3, 32
@@ -435,9 +435,9 @@ define iXLen2 @test_udiv_65535(iXLen2 %x) nounwind {
; RV64-NEXT: lui a4, 983039
; RV64-NEXT: lui a5, 1048560
; RV64-NEXT: sltu a6, a2, a0
-; RV64-NEXT: addiw a3, a3, 1
+; RV64-NEXT: addi a3, a3, 1
; RV64-NEXT: slli a4, a4, 4
-; RV64-NEXT: addiw a5, a5, -1
+; RV64-NEXT: addi a5, a5, -1
; RV64-NEXT: add a2, a2, a6
; RV64-NEXT: slli a6, a3, 32
; RV64-NEXT: addi a4, a4, -1
@@ -496,7 +496,7 @@ define iXLen2 @test_udiv_65537(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 1048560
; RV64-NEXT: lui a4, 983041
; RV64-NEXT: sltu a5, a2, a0
-; RV64-NEXT: addiw a6, a3, 1
+; RV64-NEXT: addi a6, a3, 1
; RV64-NEXT: slli a4, a4, 4
; RV64-NEXT: add a2, a2, a5
; RV64-NEXT: slli a5, a6, 32
@@ -559,7 +559,7 @@ define iXLen2 @test_udiv_12(iXLen2 %x) nounwind {
; RV64-NEXT: lui a3, 699051
; RV64-NEXT: lui a4, %hi(.LCPI10_0)
; RV64-NEXT: or a0, a0, a2
-; RV64-NEXT: addiw a2, a3, -1365
+; RV64-NEXT: addi a2, a3, -1365
; RV64-NEXT: ld a3, %lo(.LCPI10_0)(a4)
; RV64-NEXT: add a4, a0, a1
; RV64-NEXT: slli a5, a2, 32
diff --git a/llvm/test/CodeGen/RISCV/split-urem-by-constant.ll b/llvm/test/CodeGen/RISCV/split-urem-by-constant.ll
index ae8117c3ce0bd..bc4a99a00ac64 100644
--- a/llvm/test/CodeGen/RISCV/split-urem-by-constant.ll
+++ b/llvm/test/CodeGen/RISCV/split-urem-by-constant.ll
@@ -25,7 +25,7 @@ define iXLen2 @test_urem_3(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 699051
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, -1365
+; RV64-NEXT: addi a2, a2, -1365
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -61,7 +61,7 @@ define iXLen2 @test_urem_5(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 838861
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, -819
+; RV64-NEXT: addi a2, a2, -819
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -149,7 +149,7 @@ define iXLen2 @test_urem_15(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 559241
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, -1911
+; RV64-NEXT: addi a2, a2, -1911
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -185,7 +185,7 @@ define iXLen2 @test_urem_17(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 986895
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, 241
+; RV64-NEXT: addi a2, a2, 241
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -221,7 +221,7 @@ define iXLen2 @test_urem_255(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 526344
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, 129
+; RV64-NEXT: addi a2, a2, 129
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -257,7 +257,7 @@ define iXLen2 @test_urem_257(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 1044496
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, -255
+; RV64-NEXT: addi a2, a2, -255
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -293,7 +293,7 @@ define iXLen2 @test_urem_65535(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 524296
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a2, a2, 1
+; RV64-NEXT: addi a2, a2, 1
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a2, 32
; RV64-NEXT: add a1, a2, a1
@@ -329,7 +329,7 @@ define iXLen2 @test_urem_65537(iXLen2 %x) nounwind {
; RV64-NEXT: add a1, a0, a1
; RV64-NEXT: lui a2, 1048560
; RV64-NEXT: sltu a0, a1, a0
-; RV64-NEXT: addiw a3, a2, 1
+; RV64-NEXT: addi a3, a2, 1
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: slli a1, a3, 32
; RV64-NEXT: add a1, a3, a1
@@ -373,7 +373,7 @@ define iXLen2 @test_urem_12(iXLen2 %x) nounwind {
; RV64-NEXT: srli a3, a0, 2
; RV64-NEXT: lui a4, 699051
; RV64-NEXT: or a2, a3, a2
-; RV64-NEXT: addiw a3, a4, -1365
+; RV64-NEXT: addi a3, a4, -1365
; RV64-NEXT: slli a4, a3, 32
; RV64-NEXT: add a3, a3, a4
; RV64-NEXT: srli a1, a1, 2
diff --git a/llvm/test/CodeGen/RISCV/srem-lkk.ll b/llvm/test/CodeGen/RISCV/srem-lkk.ll
index 7c291bbceedc6..54a8f8625bbe0 100644
--- a/llvm/test/CodeGen/RISCV/srem-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/srem-lkk.ll
@@ -43,7 +43,7 @@ define i32 @fold_srem_positive_odd(i32 %x) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a1, a0
; RV64IM-NEXT: lui a2, 706409
-; RV64IM-NEXT: addiw a2, a2, 389
+; RV64IM-NEXT: addi a2, a2, 389
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a1, a1, 32
; RV64IM-NEXT: add a1, a1, a0
@@ -93,7 +93,7 @@ define i32 @fold_srem_positive_even(i32 %x) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a1, a0
; RV64IM-NEXT: lui a2, 253241
-; RV64IM-NEXT: addiw a2, a2, -15
+; RV64IM-NEXT: addi a2, a2, -15
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a2, a1, 63
; RV64IM-NEXT: srai a1, a1, 40
@@ -141,7 +141,7 @@ define i32 @fold_srem_negative_odd(i32 %x) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a1, a0
; RV64IM-NEXT: lui a2, 677296
-; RV64IM-NEXT: addiw a2, a2, -91
+; RV64IM-NEXT: addi a2, a2, -91
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a2, a1, 63
; RV64IM-NEXT: srai a1, a1, 40
@@ -182,7 +182,7 @@ define i32 @fold_srem_negative_even(i32 %x) nounwind {
; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: sext.w a0, a0
; RV64I-NEXT: lui a1, 1048570
-; RV64I-NEXT: addiw a1, a1, 1595
+; RV64I-NEXT: addi a1, a1, 1595
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload
; RV64I-NEXT: addi sp, sp, 16
@@ -192,7 +192,7 @@ define i32 @fold_srem_negative_even(i32 %x) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a1, a0
; RV64IM-NEXT: lui a2, 1036895
-; RV64IM-NEXT: addiw a2, a2, 999
+; RV64IM-NEXT: addi a2, a2, 999
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a2, a1, 63
; RV64IM-NEXT: srai a1, a1, 40
@@ -269,7 +269,7 @@ define i32 @combine_srem_sdiv(i32 %x) nounwind {
; RV64IM: # %bb.0:
; RV64IM-NEXT: sext.w a1, a0
; RV64IM-NEXT: lui a2, 706409
-; RV64IM-NEXT: addiw a2, a2, 389
+; RV64IM-NEXT: addi a2, a2, 389
; RV64IM-NEXT: mul a1, a1, a2
; RV64IM-NEXT: srli a1, a1, 32
; RV64IM-NEXT: add a1, a1, a0
diff --git a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
index 17a09bf7dbe6c..93fb230f51ce1 100644
--- a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
@@ -63,7 +63,7 @@ define i1 @test_srem_odd(i29 %X) nounwind {
; RV64-NEXT: subw a1, a1, a0
; RV64-NEXT: slli a1, a1, 35
; RV64-NEXT: srli a1, a1, 35
-; RV64-NEXT: addiw a0, a3, -165
+; RV64-NEXT: addi a0, a3, -165
; RV64-NEXT: sltu a0, a1, a0
; RV64-NEXT: ret
;
@@ -93,7 +93,7 @@ define i1 @test_srem_odd(i29 %X) nounwind {
; RV64M-NEXT: lui a1, 1324
; RV64M-NEXT: slli a0, a0, 35
; RV64M-NEXT: srli a0, a0, 35
-; RV64M-NEXT: addiw a1, a1, -165
+; RV64M-NEXT: addi a1, a1, -165
; RV64M-NEXT: sltu a0, a0, a1
; RV64M-NEXT: ret
;
@@ -123,7 +123,7 @@ define i1 @test_srem_odd(i29 %X) nounwind {
; RV64MV-NEXT: lui a1, 1324
; RV64MV-NEXT: slli a0, a0, 35
; RV64MV-NEXT: srli a0, a0, 35
-; RV64MV-NEXT: addiw a1, a1, -165
+; RV64MV-NEXT: addi a1, a1, -165
; RV64MV-NEXT: sltu a0, a0, a1
; RV64MV-NEXT: ret
%srem = srem i29 %X, 99
@@ -612,7 +612,7 @@ define void @test_srem_vec(ptr %X) nounwind {
; RV64M-NEXT: lbu a3, 12(a0)
; RV64M-NEXT: lui a4, %hi(.LCPI3_0)
; RV64M-NEXT: lui a5, 699051
-; RV64M-NEXT: addiw a5, a5, -1365
+; RV64M-NEXT: addi a5, a5, -1365
; RV64M-NEXT: slli a6, a5, 32
; RV64M-NEXT: add a5, a5, a6
; RV64M-NEXT: srli a6, a1, 2
diff --git a/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll b/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
index cf65d4e0cf805..2be3324b86032 100644
--- a/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/srem-vector-lkk.ll
@@ -863,7 +863,7 @@ define <4 x i16> @dont_fold_srem_one(<4 x i16> %x) nounwind {
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a1, a0, 1327
+; RV64I-NEXT: addi a1, a0, 1327
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: sh zero, 0(s2)
@@ -1016,7 +1016,7 @@ define <4 x i16> @dont_fold_urem_i16_smax(<4 x i16> %x) nounwind {
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: mv s2, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a1, a0, 1327
+; RV64I-NEXT: addi a1, a0, 1327
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: sh zero, 0(s0)
@@ -1233,7 +1233,7 @@ define <4 x i64> @dont_fold_srem_i64(<4 x i64> %x) nounwind {
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a1, a0, 1327
+; RV64I-NEXT: addi a1, a0, 1327
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __moddi3
; RV64I-NEXT: sd zero, 0(s2)
@@ -1275,7 +1275,7 @@ define <4 x i64> @dont_fold_srem_i64(<4 x i64> %x) nounwind {
; RV64IM-NEXT: li a7, 654
; RV64IM-NEXT: mul a5, a5, a7
; RV64IM-NEXT: lui a7, 1
-; RV64IM-NEXT: addiw a7, a7, 1327
+; RV64IM-NEXT: addi a7, a7, 1327
; RV64IM-NEXT: mul a6, a6, a7
; RV64IM-NEXT: li a7, 23
; RV64IM-NEXT: mul a4, a4, a7
diff --git a/llvm/test/CodeGen/RISCV/ssub_sat.ll b/llvm/test/CodeGen/RISCV/ssub_sat.ll
index cc5cd76e913c6..ba4d170c719fc 100644
--- a/llvm/test/CodeGen/RISCV/ssub_sat.ll
+++ b/llvm/test/CodeGen/RISCV/ssub_sat.ll
@@ -113,7 +113,7 @@ define signext i16 @func16(i16 signext %x, i16 signext %y) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: sub a0, a0, a1
; RV64I-NEXT: lui a1, 8
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: bge a0, a1, .LBB2_3
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: lui a1, 1048568
@@ -142,7 +142,7 @@ define signext i16 @func16(i16 signext %x, i16 signext %y) nounwind {
; RV64IZbb: # %bb.0:
; RV64IZbb-NEXT: sub a0, a0, a1
; RV64IZbb-NEXT: lui a1, 8
-; RV64IZbb-NEXT: addiw a1, a1, -1
+; RV64IZbb-NEXT: addi a1, a1, -1
; RV64IZbb-NEXT: min a0, a0, a1
; RV64IZbb-NEXT: lui a1, 1048568
; RV64IZbb-NEXT: max a0, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/ssub_sat_plus.ll b/llvm/test/CodeGen/RISCV/ssub_sat_plus.ll
index 0499992b71778..437c1e2a2e489 100644
--- a/llvm/test/CodeGen/RISCV/ssub_sat_plus.ll
+++ b/llvm/test/CodeGen/RISCV/ssub_sat_plus.ll
@@ -128,7 +128,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64I-NEXT: slli a1, a1, 48
; RV64I-NEXT: srai a1, a1, 48
; RV64I-NEXT: sub a0, a0, a1
-; RV64I-NEXT: addiw a1, a2, -1
+; RV64I-NEXT: addi a1, a2, -1
; RV64I-NEXT: bge a0, a1, .LBB2_3
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: lui a1, 1048568
@@ -162,7 +162,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64IZbb-NEXT: mul a1, a1, a2
; RV64IZbb-NEXT: lui a2, 8
; RV64IZbb-NEXT: sext.h a1, a1
-; RV64IZbb-NEXT: addiw a2, a2, -1
+; RV64IZbb-NEXT: addi a2, a2, -1
; RV64IZbb-NEXT: sub a0, a0, a1
; RV64IZbb-NEXT: min a0, a0, a2
; RV64IZbb-NEXT: lui a1, 1048568
diff --git a/llvm/test/CodeGen/RISCV/stack-clash-prologue-nounwind.ll b/llvm/test/CodeGen/RISCV/stack-clash-prologue-nounwind.ll
index 882d0b814063d..92b2afe2e72c4 100644
--- a/llvm/test/CodeGen/RISCV/stack-clash-prologue-nounwind.ll
+++ b/llvm/test/CodeGen/RISCV/stack-clash-prologue-nounwind.ll
@@ -44,7 +44,7 @@ define i8 @f1() #0 nounwind {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -86,7 +86,7 @@ define i8 @f2() #0 nounwind {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -131,7 +131,7 @@ define i8 @f3() #0 "stack-probe-size"="32768" nounwind {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -164,13 +164,13 @@ define i8 @f4() nounwind {
; RV64I-LABEL: f4:
; RV64I: # %bb.0: # %entry
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: li a0, 3
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -211,7 +211,7 @@ define i8 @f5() #0 "stack-probe-size"="65536" nounwind {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 256
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -259,7 +259,7 @@ define i8 @f6() #0 nounwind {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 262144
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
@@ -303,13 +303,13 @@ define i8 @f7() #0 "stack-probe-size"="65536" nounwind {
; RV64I-NEXT: bne sp, t1, .LBB7_1
; RV64I-NEXT: # %bb.2: # %entry
; RV64I-NEXT: lui a0, 13
-; RV64I-NEXT: addiw a0, a0, -1520
+; RV64I-NEXT: addi a0, a0, -1520
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: li a0, 3
; RV64I-NEXT: sb a0, 9(sp)
; RV64I-NEXT: lbu a0, 9(sp)
; RV64I-NEXT: lui a1, 244141
-; RV64I-NEXT: addiw a1, a1, -1520
+; RV64I-NEXT: addi a1, a1, -1520
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll b/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
index b1c0755c36ec1..b76f2b2464709 100644
--- a/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
+++ b/llvm/test/CodeGen/RISCV/stack-clash-prologue.ll
@@ -50,7 +50,7 @@ define i8 @f1() #0 {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -99,7 +99,7 @@ define i8 @f2() #0 {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -152,7 +152,7 @@ define i8 @f3() #0 "stack-probe-size"="32768" {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -190,14 +190,14 @@ define i8 @f4() {
; RV64I-LABEL: f4:
; RV64I: # %bb.0: # %entry
; RV64I-NEXT: lui a0, 16
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 65552
; RV64I-NEXT: li a0, 3
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -244,7 +244,7 @@ define i8 @f5() #0 "stack-probe-size"="65536" {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 256
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -300,7 +300,7 @@ define i8 @f6() #0 {
; RV64I-NEXT: sb a0, 16(sp)
; RV64I-NEXT: lbu a0, 16(sp)
; RV64I-NEXT: lui a1, 262144
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
@@ -351,14 +351,14 @@ define i8 @f7() #0 "stack-probe-size"="65536" {
; RV64I-NEXT: # %bb.2: # %entry
; RV64I-NEXT: .cfi_def_cfa_register sp
; RV64I-NEXT: lui a0, 13
-; RV64I-NEXT: addiw a0, a0, -1520
+; RV64I-NEXT: addi a0, a0, -1520
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 1000000016
; RV64I-NEXT: li a0, 3
; RV64I-NEXT: sb a0, 9(sp)
; RV64I-NEXT: lbu a0, 9(sp)
; RV64I-NEXT: lui a1, 244141
-; RV64I-NEXT: addiw a1, a1, -1520
+; RV64I-NEXT: addi a1, a1, -1520
; RV64I-NEXT: add sp, sp, a1
; RV64I-NEXT: .cfi_def_cfa_offset 0
; RV64I-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/stack-inst-compress.mir b/llvm/test/CodeGen/RISCV/stack-inst-compress.mir
index fe84d29963353..bd31662ae2dac 100644
--- a/llvm/test/CodeGen/RISCV/stack-inst-compress.mir
+++ b/llvm/test/CodeGen/RISCV/stack-inst-compress.mir
@@ -300,13 +300,13 @@ body: |
; CHECK-RV64-NO-COM-NEXT: frame-setup SD killed $x1, $x2, 2024 :: (store (s64) into %stack.1)
; CHECK-RV64-NO-COM-NEXT: frame-setup CFI_INSTRUCTION offset $x1, -8
; CHECK-RV64-NO-COM-NEXT: $x10 = frame-setup LUI 2
- ; CHECK-RV64-NO-COM-NEXT: $x10 = frame-setup ADDIW killed $x10, -2016
+ ; CHECK-RV64-NO-COM-NEXT: $x10 = frame-setup ADDI killed $x10, -2016
; CHECK-RV64-NO-COM-NEXT: $x2 = frame-setup SUB $x2, killed $x10
; CHECK-RV64-NO-COM-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 8208
; CHECK-RV64-NO-COM-NEXT: renamable $x10 = ADDI $x2, 8
; CHECK-RV64-NO-COM-NEXT: PseudoCALL target-flags(riscv-call) @_Z6calleePi, csr_ilp32_lp64, implicit-def dead $x1, implicit killed $x10, implicit-def $x2
; CHECK-RV64-NO-COM-NEXT: $x10 = frame-destroy LUI 2
- ; CHECK-RV64-NO-COM-NEXT: $x10 = frame-destroy ADDIW killed $x10, -2016
+ ; CHECK-RV64-NO-COM-NEXT: $x10 = frame-destroy ADDI killed $x10, -2016
; CHECK-RV64-NO-COM-NEXT: $x2 = frame-destroy ADD $x2, killed $x10
; CHECK-RV64-NO-COM-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 2032
; CHECK-RV64-NO-COM-NEXT: $x1 = frame-destroy LD $x2, 2024 :: (load (s64) from %stack.1)
@@ -323,13 +323,13 @@ body: |
; CHECK-RV64-COM-NEXT: frame-setup SD killed $x1, $x2, 488 :: (store (s64) into %stack.1)
; CHECK-RV64-COM-NEXT: frame-setup CFI_INSTRUCTION offset $x1, -8
; CHECK-RV64-COM-NEXT: $x10 = frame-setup LUI 2
- ; CHECK-RV64-COM-NEXT: $x10 = frame-setup ADDIW killed $x10, -480
+ ; CHECK-RV64-COM-NEXT: $x10 = frame-setup ADDI killed $x10, -480
; CHECK-RV64-COM-NEXT: $x2 = frame-setup SUB $x2, killed $x10
; CHECK-RV64-COM-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 8208
; CHECK-RV64-COM-NEXT: renamable $x10 = ADDI $x2, 8
; CHECK-RV64-COM-NEXT: PseudoCALL target-flags(riscv-call) @_Z6calleePi, csr_ilp32_lp64, implicit-def dead $x1, implicit killed $x10, implicit-def $x2
; CHECK-RV64-COM-NEXT: $x10 = frame-destroy LUI 2
- ; CHECK-RV64-COM-NEXT: $x10 = frame-destroy ADDIW killed $x10, -480
+ ; CHECK-RV64-COM-NEXT: $x10 = frame-destroy ADDI killed $x10, -480
; CHECK-RV64-COM-NEXT: $x2 = frame-destroy ADD $x2, killed $x10
; CHECK-RV64-COM-NEXT: frame-destroy CFI_INSTRUCTION def_cfa_offset 496
; CHECK-RV64-COM-NEXT: $x1 = frame-destroy LD $x2, 488 :: (load (s64) from %stack.1)
diff --git a/llvm/test/CodeGen/RISCV/stack-offset.ll b/llvm/test/CodeGen/RISCV/stack-offset.ll
index 3dc4fcfe26a82..79b937a1a292c 100644
--- a/llvm/test/CodeGen/RISCV/stack-offset.ll
+++ b/llvm/test/CodeGen/RISCV/stack-offset.ll
@@ -101,10 +101,10 @@ define void @test() {
; RV64I-NEXT: addi a1, sp, 2047
; RV64I-NEXT: addi a1, a1, 9
; RV64I-NEXT: lui a2, 1
-; RV64I-NEXT: addiw a2, a2, 8
+; RV64I-NEXT: addi a2, a2, 8
; RV64I-NEXT: add a2, sp, a2
; RV64I-NEXT: lui a3, 1
-; RV64I-NEXT: addiw a3, a3, 1032
+; RV64I-NEXT: addi a3, a3, 1032
; RV64I-NEXT: add a3, sp, a3
; RV64I-NEXT: call inspect
; RV64I-NEXT: addi sp, sp, 2032
@@ -225,7 +225,7 @@ define void @align_8() {
; RV64I-NEXT: .cfi_def_cfa_offset 4128
; RV64I-NEXT: addi a0, sp, 15
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 16
+; RV64I-NEXT: addi a1, a1, 16
; RV64I-NEXT: add a1, sp, a1
; RV64I-NEXT: call inspect
; RV64I-NEXT: addi sp, sp, 2032
@@ -340,7 +340,7 @@ define void @align_4() {
; RV64I-NEXT: .cfi_def_cfa_offset 4128
; RV64I-NEXT: addi a0, sp, 19
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 20
+; RV64I-NEXT: addi a1, a1, 20
; RV64I-NEXT: add a1, sp, a1
; RV64I-NEXT: call inspect
; RV64I-NEXT: addi sp, sp, 2032
@@ -433,7 +433,7 @@ define void @align_2() {
; RV64-NEXT: .cfi_def_cfa_offset 4128
; RV64-NEXT: addi a0, sp, 21
; RV64-NEXT: lui a1, 1
-; RV64-NEXT: addiw a1, a1, 22
+; RV64-NEXT: addi a1, a1, 22
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: call inspect
; RV64-NEXT: addi sp, sp, 2032
@@ -505,7 +505,7 @@ define void @align_1() {
; RV64-NEXT: .cfi_def_cfa_offset 4128
; RV64-NEXT: addi a0, sp, 22
; RV64-NEXT: lui a1, 1
-; RV64-NEXT: addiw a1, a1, 23
+; RV64-NEXT: addi a1, a1, 23
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: call inspect
; RV64-NEXT: addi sp, sp, 2032
@@ -573,16 +573,16 @@ define void @align_1_lui() {
; RV64I-NEXT: sd ra, 2024(sp) # 8-byte Folded Spill
; RV64I-NEXT: .cfi_offset ra, -8
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 6144
; RV64I-NEXT: addi a0, sp, 20
; RV64I-NEXT: lui a1, 1
-; RV64I-NEXT: addiw a1, a1, 2039
+; RV64I-NEXT: addi a1, a1, 2039
; RV64I-NEXT: add a1, sp, a1
; RV64I-NEXT: call inspect
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a0, a0, 16
+; RV64I-NEXT: addi a0, a0, 16
; RV64I-NEXT: add sp, sp, a0
; RV64I-NEXT: .cfi_def_cfa_offset 2032
; RV64I-NEXT: ld ra, 2024(sp) # 8-byte Folded Reload
@@ -602,7 +602,7 @@ define void @align_1_lui() {
; RV64ZBA-NEXT: .cfi_def_cfa_offset 6144
; RV64ZBA-NEXT: addi a0, sp, 20
; RV64ZBA-NEXT: lui a1, 1
-; RV64ZBA-NEXT: addiw a1, a1, 2039
+; RV64ZBA-NEXT: addi a1, a1, 2039
; RV64ZBA-NEXT: add a1, sp, a1
; RV64ZBA-NEXT: call inspect
; RV64ZBA-NEXT: li a0, 514
diff --git a/llvm/test/CodeGen/RISCV/stack-realignment.ll b/llvm/test/CodeGen/RISCV/stack-realignment.ll
index 368840933f04b..55e1c62c66704 100644
--- a/llvm/test/CodeGen/RISCV/stack-realignment.ll
+++ b/llvm/test/CodeGen/RISCV/stack-realignment.ll
@@ -1340,7 +1340,7 @@ define void @caller4096() {
; RV64I-NEXT: addi s0, sp, 2032
; RV64I-NEXT: .cfi_def_cfa s0, 0
; RV64I-NEXT: lui a0, 2
-; RV64I-NEXT: addiw a0, a0, -2032
+; RV64I-NEXT: addi a0, a0, -2032
; RV64I-NEXT: sub sp, sp, a0
; RV64I-NEXT: srli a0, sp, 12
; RV64I-NEXT: slli sp, a0, 12
@@ -1368,7 +1368,7 @@ define void @caller4096() {
; RV64I-LP64E-NEXT: addi s0, sp, 2040
; RV64I-LP64E-NEXT: .cfi_def_cfa s0, 0
; RV64I-LP64E-NEXT: lui a0, 2
-; RV64I-LP64E-NEXT: addiw a0, a0, -2040
+; RV64I-LP64E-NEXT: addi a0, a0, -2040
; RV64I-LP64E-NEXT: sub sp, sp, a0
; RV64I-LP64E-NEXT: srli a0, sp, 12
; RV64I-LP64E-NEXT: slli sp, a0, 12
diff --git a/llvm/test/CodeGen/RISCV/switch-width.ll b/llvm/test/CodeGen/RISCV/switch-width.ll
index d902bd3276a3c..c570e737780d8 100644
--- a/llvm/test/CodeGen/RISCV/switch-width.ll
+++ b/llvm/test/CodeGen/RISCV/switch-width.ll
@@ -119,7 +119,7 @@ define i32 @trunc_i17(i64 %a) {
; CHECK-LABEL: trunc_i17:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: lui a1, 32
-; CHECK-NEXT: addiw a1, a1, -1
+; CHECK-NEXT: addi a1, a1, -1
; CHECK-NEXT: and a0, a0, a1
; CHECK-NEXT: beq a0, a1, .LBB3_3
; CHECK-NEXT: # %bb.1: # %entry
@@ -159,7 +159,7 @@ define i32 @trunc_i16(i64 %a) {
; CHECK-LABEL: trunc_i16:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: lui a1, 16
-; CHECK-NEXT: addiw a1, a1, -1
+; CHECK-NEXT: addi a1, a1, -1
; CHECK-NEXT: and a0, a0, a1
; CHECK-NEXT: beq a0, a1, .LBB4_3
; CHECK-NEXT: # %bb.1: # %entry
@@ -200,7 +200,7 @@ define i32 @trunc_i12(i64 %a) {
; CHECK-LABEL: trunc_i12:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: lui a1, 1
-; CHECK-NEXT: addiw a1, a1, -1
+; CHECK-NEXT: addi a1, a1, -1
; CHECK-NEXT: and a0, a0, a1
; CHECK-NEXT: beq a0, a1, .LBB5_3
; CHECK-NEXT: # %bb.1: # %entry
diff --git a/llvm/test/CodeGen/RISCV/trunc-nsw-nuw.ll b/llvm/test/CodeGen/RISCV/trunc-nsw-nuw.ll
index 9f81ff8c8d31a..8fb69693adb74 100644
--- a/llvm/test/CodeGen/RISCV/trunc-nsw-nuw.ll
+++ b/llvm/test/CodeGen/RISCV/trunc-nsw-nuw.ll
@@ -17,7 +17,7 @@ define signext i32 @trunc_nuw_nsw_urem(i64 %x) nounwind {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: lui a1, 210
; CHECK-NEXT: lui a2, 2
-; CHECK-NEXT: addiw a1, a1, -1167
+; CHECK-NEXT: addi a1, a1, -1167
; CHECK-NEXT: slli a1, a1, 12
; CHECK-NEXT: addi a1, a1, 1881
; CHECK-NEXT: mul a1, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/uadd_sat.ll b/llvm/test/CodeGen/RISCV/uadd_sat.ll
index dbcb68eb3c9eb..ee591a1784635 100644
--- a/llvm/test/CodeGen/RISCV/uadd_sat.ll
+++ b/llvm/test/CodeGen/RISCV/uadd_sat.ll
@@ -109,7 +109,7 @@ define zeroext i16 @func16(i16 zeroext %x, i16 zeroext %y) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: add a0, a0, a1
; RV64I-NEXT: lui a1, 16
-; RV64I-NEXT: addiw a1, a1, -1
+; RV64I-NEXT: addi a1, a1, -1
; RV64I-NEXT: bltu a0, a1, .LBB2_2
; RV64I-NEXT: # %bb.1:
; RV64I-NEXT: mv a0, a1
@@ -128,7 +128,7 @@ define zeroext i16 @func16(i16 zeroext %x, i16 zeroext %y) nounwind {
; RV64IZbb: # %bb.0:
; RV64IZbb-NEXT: add a0, a0, a1
; RV64IZbb-NEXT: lui a1, 16
-; RV64IZbb-NEXT: addiw a1, a1, -1
+; RV64IZbb-NEXT: addi a1, a1, -1
; RV64IZbb-NEXT: minu a0, a0, a1
; RV64IZbb-NEXT: ret
%tmp = call i16 @llvm.uadd.sat.i16(i16 %x, i16 %y);
diff --git a/llvm/test/CodeGen/RISCV/uadd_sat_plus.ll b/llvm/test/CodeGen/RISCV/uadd_sat_plus.ll
index 65b3d763fd72f..da29d26b7147f 100644
--- a/llvm/test/CodeGen/RISCV/uadd_sat_plus.ll
+++ b/llvm/test/CodeGen/RISCV/uadd_sat_plus.ll
@@ -120,7 +120,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a3, 16
; RV64I-NEXT: mul a2, a1, a2
-; RV64I-NEXT: addiw a1, a3, -1
+; RV64I-NEXT: addi a1, a3, -1
; RV64I-NEXT: and a0, a0, a1
; RV64I-NEXT: and a2, a2, a1
; RV64I-NEXT: add a0, a0, a2
@@ -148,7 +148,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64IZbb-NEXT: lui a2, 16
; RV64IZbb-NEXT: zext.h a1, a1
; RV64IZbb-NEXT: add a0, a0, a1
-; RV64IZbb-NEXT: addiw a2, a2, -1
+; RV64IZbb-NEXT: addi a2, a2, -1
; RV64IZbb-NEXT: minu a0, a0, a2
; RV64IZbb-NEXT: ret
%a = mul i16 %y, %z
diff --git a/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
index 46e250710f9c1..d33c6662ceb5c 100644
--- a/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll
@@ -148,7 +148,7 @@ define i1 @test_urem_even(i27 %X) nounwind {
; RV64-NEXT: or a0, a0, a1
; RV64-NEXT: slli a0, a0, 37
; RV64-NEXT: srli a0, a0, 37
-; RV64-NEXT: addiw a1, a2, -1755
+; RV64-NEXT: addi a1, a2, -1755
; RV64-NEXT: sltu a0, a0, a1
; RV64-NEXT: ret
;
@@ -180,7 +180,7 @@ define i1 @test_urem_even(i27 %X) nounwind {
; RV64M-NEXT: lui a1, 2341
; RV64M-NEXT: slli a0, a0, 37
; RV64M-NEXT: srli a0, a0, 37
-; RV64M-NEXT: addiw a1, a1, -1755
+; RV64M-NEXT: addi a1, a1, -1755
; RV64M-NEXT: sltu a0, a0, a1
; RV64M-NEXT: ret
;
@@ -212,7 +212,7 @@ define i1 @test_urem_even(i27 %X) nounwind {
; RV64MV-NEXT: lui a1, 2341
; RV64MV-NEXT: slli a0, a0, 37
; RV64MV-NEXT: srli a0, a0, 37
-; RV64MV-NEXT: addiw a1, a1, -1755
+; RV64MV-NEXT: addi a1, a1, -1755
; RV64MV-NEXT: sltu a0, a0, a1
; RV64MV-NEXT: ret
%urem = urem i27 %X, 14
diff --git a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
index 988856ca70923..e8e91f0baee14 100644
--- a/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
+++ b/llvm/test/CodeGen/RISCV/urem-vector-lkk.ll
@@ -721,7 +721,7 @@ define <4 x i16> @dont_fold_urem_one(<4 x i16> %x) nounwind {
; RV64I-NEXT: call __umoddi3
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a1, a0, 1327
+; RV64I-NEXT: addi a1, a0, 1327
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __umoddi3
; RV64I-NEXT: sh zero, 0(s2)
@@ -941,7 +941,7 @@ define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) nounwind {
; RV64I-NEXT: call __umoddi3
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: lui a0, 1
-; RV64I-NEXT: addiw a1, a0, 1327
+; RV64I-NEXT: addi a1, a0, 1327
; RV64I-NEXT: mv a0, s1
; RV64I-NEXT: call __umoddi3
; RV64I-NEXT: sd zero, 0(s2)
@@ -973,7 +973,7 @@ define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) nounwind {
; RV64IM-NEXT: srli a4, a4, 7
; RV64IM-NEXT: mul a4, a4, a6
; RV64IM-NEXT: lui a6, 1
-; RV64IM-NEXT: addiw a6, a6, 1327
+; RV64IM-NEXT: addi a6, a6, 1327
; RV64IM-NEXT: mulhu a5, a3, a5
; RV64IM-NEXT: mulhu a7, a1, a7
; RV64IM-NEXT: srli a7, a7, 12
diff --git a/llvm/test/CodeGen/RISCV/usub_sat_plus.ll b/llvm/test/CodeGen/RISCV/usub_sat_plus.ll
index aa42568c539ba..3285349ea068a 100644
--- a/llvm/test/CodeGen/RISCV/usub_sat_plus.ll
+++ b/llvm/test/CodeGen/RISCV/usub_sat_plus.ll
@@ -122,7 +122,7 @@ define i16 @func16(i16 %x, i16 %y, i16 %z) nounwind {
; RV64I: # %bb.0:
; RV64I-NEXT: lui a3, 16
; RV64I-NEXT: mul a1, a1, a2
-; RV64I-NEXT: addiw a3, a3, -1
+; RV64I-NEXT: addi a3, a3, -1
; RV64I-NEXT: and a0, a0, a3
; RV64I-NEXT: and a1, a1, a3
; RV64I-NEXT: sub a1, a0, a1
diff --git a/llvm/test/CodeGen/RISCV/vararg.ll b/llvm/test/CodeGen/RISCV/vararg.ll
index 895d84b38be32..27035c65ee3a6 100644
--- a/llvm/test/CodeGen/RISCV/vararg.ll
+++ b/llvm/test/CodeGen/RISCV/vararg.ll
@@ -2520,7 +2520,7 @@ define void @va5_aligned_stack_caller() nounwind {
; LP64-LP64F-LP64D-FPELIM-NEXT: ld t4, %lo(.LCPI11_0)(a2)
; LP64-LP64F-LP64D-FPELIM-NEXT: ld a2, %lo(.LCPI11_1)(a3)
; LP64-LP64F-LP64D-FPELIM-NEXT: ld a3, %lo(.LCPI11_2)(a6)
-; LP64-LP64F-LP64D-FPELIM-NEXT: addiw a6, t3, 761
+; LP64-LP64F-LP64D-FPELIM-NEXT: addi a6, t3, 761
; LP64-LP64F-LP64D-FPELIM-NEXT: slli a6, a6, 11
; LP64-LP64F-LP64D-FPELIM-NEXT: sd t4, 0(sp)
; LP64-LP64F-LP64D-FPELIM-NEXT: sd t2, 8(sp)
@@ -2552,7 +2552,7 @@ define void @va5_aligned_stack_caller() nounwind {
; LP64-LP64F-LP64D-WITHFP-NEXT: ld t4, %lo(.LCPI11_0)(a2)
; LP64-LP64F-LP64D-WITHFP-NEXT: ld a2, %lo(.LCPI11_1)(a3)
; LP64-LP64F-LP64D-WITHFP-NEXT: ld a3, %lo(.LCPI11_2)(a6)
-; LP64-LP64F-LP64D-WITHFP-NEXT: addiw a6, t3, 761
+; LP64-LP64F-LP64D-WITHFP-NEXT: addi a6, t3, 761
; LP64-LP64F-LP64D-WITHFP-NEXT: slli a6, a6, 11
; LP64-LP64F-LP64D-WITHFP-NEXT: sd t4, 0(sp)
; LP64-LP64F-LP64D-WITHFP-NEXT: sd t2, 8(sp)
@@ -2583,7 +2583,7 @@ define void @va5_aligned_stack_caller() nounwind {
; LP64E-FPELIM-NEXT: sd a2, 40(sp)
; LP64E-FPELIM-NEXT: li a5, 13
; LP64E-FPELIM-NEXT: ld a7, %lo(.LCPI11_0)(a7)
-; LP64E-FPELIM-NEXT: addiw t1, t1, 761
+; LP64E-FPELIM-NEXT: addi t1, t1, 761
; LP64E-FPELIM-NEXT: ld a2, %lo(.LCPI11_1)(t2)
; LP64E-FPELIM-NEXT: ld a3, %lo(.LCPI11_2)(t3)
; LP64E-FPELIM-NEXT: slli t1, t1, 11
@@ -2617,7 +2617,7 @@ define void @va5_aligned_stack_caller() nounwind {
; LP64E-WITHFP-NEXT: sd a2, 40(sp)
; LP64E-WITHFP-NEXT: li a5, 13
; LP64E-WITHFP-NEXT: ld a7, %lo(.LCPI11_0)(a7)
-; LP64E-WITHFP-NEXT: addiw t1, t1, 761
+; LP64E-WITHFP-NEXT: addi t1, t1, 761
; LP64E-WITHFP-NEXT: ld a2, %lo(.LCPI11_1)(t2)
; LP64E-WITHFP-NEXT: ld a3, %lo(.LCPI11_2)(t3)
; LP64E-WITHFP-NEXT: slli t1, t1, 11
@@ -2989,14 +2989,14 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64-LP64F-LP64D-FPELIM-LABEL: va_large_stack:
; LP64-LP64F-LP64D-FPELIM: # %bb.0:
; LP64-LP64F-LP64D-FPELIM-NEXT: lui a0, 24414
-; LP64-LP64F-LP64D-FPELIM-NEXT: addiw a0, a0, 336
+; LP64-LP64F-LP64D-FPELIM-NEXT: addi a0, a0, 336
; LP64-LP64F-LP64D-FPELIM-NEXT: sub sp, sp, a0
; LP64-LP64F-LP64D-FPELIM-NEXT: .cfi_def_cfa_offset 100000080
; LP64-LP64F-LP64D-FPELIM-NEXT: lui a0, 24414
; LP64-LP64F-LP64D-FPELIM-NEXT: add a0, sp, a0
; LP64-LP64F-LP64D-FPELIM-NEXT: sd a1, 280(a0)
; LP64-LP64F-LP64D-FPELIM-NEXT: lui a0, 24414
-; LP64-LP64F-LP64D-FPELIM-NEXT: addiw a0, a0, 284
+; LP64-LP64F-LP64D-FPELIM-NEXT: addi a0, a0, 284
; LP64-LP64F-LP64D-FPELIM-NEXT: add a0, sp, a0
; LP64-LP64F-LP64D-FPELIM-NEXT: sd a0, 8(sp)
; LP64-LP64F-LP64D-FPELIM-NEXT: lui a0, 24414
@@ -3021,7 +3021,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64-LP64F-LP64D-FPELIM-NEXT: add a1, sp, a1
; LP64-LP64F-LP64D-FPELIM-NEXT: sd a4, 304(a1)
; LP64-LP64F-LP64D-FPELIM-NEXT: lui a1, 24414
-; LP64-LP64F-LP64D-FPELIM-NEXT: addiw a1, a1, 336
+; LP64-LP64F-LP64D-FPELIM-NEXT: addi a1, a1, 336
; LP64-LP64F-LP64D-FPELIM-NEXT: add sp, sp, a1
; LP64-LP64F-LP64D-FPELIM-NEXT: .cfi_def_cfa_offset 0
; LP64-LP64F-LP64D-FPELIM-NEXT: ret
@@ -3037,7 +3037,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64-LP64F-LP64D-WITHFP-NEXT: addi s0, sp, 1968
; LP64-LP64F-LP64D-WITHFP-NEXT: .cfi_def_cfa s0, 64
; LP64-LP64F-LP64D-WITHFP-NEXT: lui a0, 24414
-; LP64-LP64F-LP64D-WITHFP-NEXT: addiw a0, a0, -1680
+; LP64-LP64F-LP64D-WITHFP-NEXT: addi a0, a0, -1680
; LP64-LP64F-LP64D-WITHFP-NEXT: sub sp, sp, a0
; LP64-LP64F-LP64D-WITHFP-NEXT: sd a1, 8(s0)
; LP64-LP64F-LP64D-WITHFP-NEXT: addi a0, s0, 12
@@ -3052,7 +3052,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64-LP64F-LP64D-WITHFP-NEXT: sd a3, 24(s0)
; LP64-LP64F-LP64D-WITHFP-NEXT: sd a4, 32(s0)
; LP64-LP64F-LP64D-WITHFP-NEXT: lui a1, 24414
-; LP64-LP64F-LP64D-WITHFP-NEXT: addiw a1, a1, -1680
+; LP64-LP64F-LP64D-WITHFP-NEXT: addi a1, a1, -1680
; LP64-LP64F-LP64D-WITHFP-NEXT: add sp, sp, a1
; LP64-LP64F-LP64D-WITHFP-NEXT: .cfi_def_cfa sp, 2032
; LP64-LP64F-LP64D-WITHFP-NEXT: ld ra, 1960(sp) # 8-byte Folded Reload
@@ -3066,11 +3066,11 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64E-FPELIM-LABEL: va_large_stack:
; LP64E-FPELIM: # %bb.0:
; LP64E-FPELIM-NEXT: lui a0, 24414
-; LP64E-FPELIM-NEXT: addiw a0, a0, 320
+; LP64E-FPELIM-NEXT: addi a0, a0, 320
; LP64E-FPELIM-NEXT: sub sp, sp, a0
; LP64E-FPELIM-NEXT: .cfi_def_cfa_offset 100000064
; LP64E-FPELIM-NEXT: lui a0, 24414
-; LP64E-FPELIM-NEXT: addiw a0, a0, 284
+; LP64E-FPELIM-NEXT: addi a0, a0, 284
; LP64E-FPELIM-NEXT: add a0, sp, a0
; LP64E-FPELIM-NEXT: sd a0, 8(sp)
; LP64E-FPELIM-NEXT: lui a0, 24414
@@ -3092,7 +3092,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64E-FPELIM-NEXT: add a1, sp, a1
; LP64E-FPELIM-NEXT: sd a4, 304(a1)
; LP64E-FPELIM-NEXT: lui a1, 24414
-; LP64E-FPELIM-NEXT: addiw a1, a1, 320
+; LP64E-FPELIM-NEXT: addi a1, a1, 320
; LP64E-FPELIM-NEXT: add sp, sp, a1
; LP64E-FPELIM-NEXT: .cfi_def_cfa_offset 0
; LP64E-FPELIM-NEXT: ret
@@ -3108,7 +3108,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64E-WITHFP-NEXT: addi s0, sp, 1992
; LP64E-WITHFP-NEXT: .cfi_def_cfa s0, 48
; LP64E-WITHFP-NEXT: lui a0, 24414
-; LP64E-WITHFP-NEXT: addiw a0, a0, -1704
+; LP64E-WITHFP-NEXT: addi a0, a0, -1704
; LP64E-WITHFP-NEXT: sub sp, sp, a0
; LP64E-WITHFP-NEXT: addi a0, s0, 12
; LP64E-WITHFP-NEXT: lui a6, 24414
@@ -3121,7 +3121,7 @@ define i32 @va_large_stack(ptr %fmt, ...) {
; LP64E-WITHFP-NEXT: sd a3, 24(s0)
; LP64E-WITHFP-NEXT: sd a4, 32(s0)
; LP64E-WITHFP-NEXT: lui a1, 24414
-; LP64E-WITHFP-NEXT: addiw a1, a1, -1704
+; LP64E-WITHFP-NEXT: addi a1, a1, -1704
; LP64E-WITHFP-NEXT: add sp, sp, a1
; LP64E-WITHFP-NEXT: .cfi_def_cfa sp, 2040
; LP64E-WITHFP-NEXT: ld ra, 1984(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/varargs-with-fp-and-second-adj.ll b/llvm/test/CodeGen/RISCV/varargs-with-fp-and-second-adj.ll
index 73d3827b84758..c8c364208da90 100644
--- a/llvm/test/CodeGen/RISCV/varargs-with-fp-and-second-adj.ll
+++ b/llvm/test/CodeGen/RISCV/varargs-with-fp-and-second-adj.ll
@@ -16,7 +16,7 @@ define dso_local void @_Z3fooPKcz(ptr noundef %0, ...) "frame-pointer"="all" {
; RV64V-NEXT: addi s0, sp, 432
; RV64V-NEXT: .cfi_def_cfa s0, 64
; RV64V-NEXT: lui t0, 2
-; RV64V-NEXT: addiw t0, t0, -576
+; RV64V-NEXT: addi t0, t0, -576
; RV64V-NEXT: sub sp, sp, t0
; RV64V-NEXT: sd a5, 40(s0)
; RV64V-NEXT: sd a6, 48(s0)
diff --git a/llvm/test/CodeGen/RISCV/zbb-logic-neg-imm.ll b/llvm/test/CodeGen/RISCV/zbb-logic-neg-imm.ll
index 449e983fb6b52..44f30440a3504 100644
--- a/llvm/test/CodeGen/RISCV/zbb-logic-neg-imm.ll
+++ b/llvm/test/CodeGen/RISCV/zbb-logic-neg-imm.ll
@@ -97,7 +97,7 @@ define i32 @addorlow16(i32 %x) {
; RV64-LABEL: addorlow16:
; RV64: # %bb.0:
; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
+; RV64-NEXT: addi a1, a1, -1
; RV64-NEXT: addw a0, a0, a1
; RV64-NEXT: or a0, a0, a1
; RV64-NEXT: ret
@@ -107,19 +107,12 @@ define i32 @addorlow16(i32 %x) {
}
define i32 @andxorlow16(i32 %x) {
-; RV32-LABEL: andxorlow16:
-; RV32: # %bb.0:
-; RV32-NEXT: lui a1, 16
-; RV32-NEXT: addi a1, a1, -1
-; RV32-NEXT: andn a0, a1, a0
-; RV32-NEXT: ret
-;
-; RV64-LABEL: andxorlow16:
-; RV64: # %bb.0:
-; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addiw a1, a1, -1
-; RV64-NEXT: andn a0, a1, a0
-; RV64-NEXT: ret
+; CHECK-LABEL: andxorlow16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: lui a1, 16
+; CHECK-NEXT: addi a1, a1, -1
+; CHECK-NEXT: andn a0, a1, a0
+; CHECK-NEXT: ret
%and = and i32 %x, 65535
%xor = xor i32 %and, 65535
ret i32 %xor
diff --git a/llvm/test/MC/RISCV/rv64c-aliases-valid.s b/llvm/test/MC/RISCV/rv64c-aliases-valid.s
index a464c24609fba..e0fd98cf3739b 100644
--- a/llvm/test/MC/RISCV/rv64c-aliases-valid.s
+++ b/llvm/test/MC/RISCV/rv64c-aliases-valid.s
@@ -30,32 +30,32 @@ li x11, 2048
# CHECK-EXPAND: addi a1, zero, -2048
li x11, -2048
# CHECK-EXPAND: c.lui a1, 1
-# CHECK-EXPAND: addiw a1, a1, -2047
+# CHECK-EXPAND: addi a1, a1, -2047
li x11, 2049
# CHECK-EXPAND: c.lui a1, 1048575
-# CHECK-EXPAND: addiw a1, a1, 2047
+# CHECK-EXPAND: addi a1, a1, 2047
li x11, -2049
# CHECK-EXPAND: c.lui a1, 1
-# CHECK-EXPAND: c.addiw a1, -1
+# CHECK-EXPAND: c.addi a1, -1
li x11, 4095
# CHECK-EXPAND: lui a1, 1048575
-# CHECK-EXPAND: c.addiw a1, 1
+# CHECK-EXPAND: c.addi a1, 1
li x11, -4095
# CHECK-EXPAND: c.lui a2, 1
li x12, 4096
# CHECK-EXPAND: lui a2, 1048575
li x12, -4096
# CHECK-EXPAND: c.lui a2, 1
-# CHECK-EXPAND: c.addiw a2, 1
+# CHECK-EXPAND: c.addi a2, 1
li x12, 4097
# CHECK-EXPAND: lui a2, 1048575
-# CHECK-EXPAND: c.addiw a2, -1
+# CHECK-EXPAND: c.addi a2, -1
li x12, -4097
# CHECK-EXPAND: lui a2, 524288
# CHECK-EXPAND: c.addiw a2, -1
li x12, 2147483647
# CHECK-EXPAND: lui a2, 524288
-# CHECK-EXPAND: c.addiw a2, 1
+# CHECK-EXPAND: c.addi a2, 1
li x12, -2147483647
# CHECK-EXPAND: lui a2, 524288
li x12, -2147483648
@@ -79,7 +79,7 @@ li t1, 0x8000000000000000
# CHECK-EXPAND: c.slli t1, 63
li t1, -0x8000000000000000
# CHECK-EXPAND: lui t2, 9321
-# CHECK-EXPAND: addiw t2, t2, -1329
+# CHECK-EXPAND: addi t2, t2, -1329
# CHECK-EXPAND: c.slli t2, 35
li t2, 0x1234567800000000
# CHECK-EXPAND: c.li t3, 7
@@ -89,7 +89,7 @@ li t2, 0x1234567800000000
# CHECK-EXPAND: c.addi t3, 15
li t3, 0x700000000B00000F
# CHECK-EXPAND: lui t4, 583
-# CHECK-EXPAND: addiw t4, t4, -1875
+# CHECK-EXPAND: addi t4, t4, -1875
# CHECK-EXPAND: c.slli t4, 14
# CHECK-EXPAND: addi t4, t4, -947
# CHECK-EXPAND: c.slli t4, 12
diff --git a/llvm/test/MC/RISCV/rv64i-aliases-valid.s b/llvm/test/MC/RISCV/rv64i-aliases-valid.s
index b3956c252447e..2a37caa0c5c2a 100644
--- a/llvm/test/MC/RISCV/rv64i-aliases-valid.s
+++ b/llvm/test/MC/RISCV/rv64i-aliases-valid.s
@@ -48,32 +48,32 @@ li x11, 2048
# CHECK-ALIAS: li a1, -2048
li x11, -2048
# CHECK-EXPAND: lui a1, 1
-# CHECK-EXPAND: addiw a1, a1, -2047
+# CHECK-EXPAND: addi a1, a1, -2047
li x11, 2049
# CHECK-EXPAND: lui a1, 1048575
-# CHECK-EXPAND: addiw a1, a1, 2047
+# CHECK-EXPAND: addi a1, a1, 2047
li x11, -2049
# CHECK-EXPAND: lui a1, 1
-# CHECK-EXPAND: addiw a1, a1, -1
+# CHECK-EXPAND: addi a1, a1, -1
li x11, 4095
# CHECK-EXPAND: lui a1, 1048575
-# CHECK-EXPAND: addiw a1, a1, 1
+# CHECK-EXPAND: addi a1, a1, 1
li x11, -4095
# CHECK-EXPAND: lui a2, 1
li x12, 4096
# CHECK-EXPAND: lui a2, 1048575
li x12, -4096
# CHECK-EXPAND: lui a2, 1
-# CHECK-EXPAND-NEXT: addiw a2, a2, 1
+# CHECK-EXPAND-NEXT: addi a2, a2, 1
li x12, 4097
# CHECK-EXPAND: lui a2, 1048575
-# CHECK-EXPAND: addiw a2, a2, -1
+# CHECK-EXPAND: addi a2, a2, -1
li x12, -4097
# CHECK-EXPAND: lui a2, 524288
# CHECK-EXPAND-NEXT: addiw a2, a2, -1
li x12, 2147483647
# CHECK-EXPAND: lui a2, 524288
-# CHECK-EXPAND-NEXT: addiw a2, a2, 1
+# CHECK-EXPAND-NEXT: addi a2, a2, 1
li x12, -2147483647
# CHECK-EXPAND: lui a2, 524288
li x12, -2147483648
@@ -107,7 +107,7 @@ li t1, 0x8000000000000000
# CHECK-ALIAS-NEXT: slli t1, t1, 63
li t1, -0x8000000000000000
# CHECK-EXPAND: lui t2, 9321
-# CHECK-EXPAND-NEXT: addiw t2, t2, -1329
+# CHECK-EXPAND-NEXT: addi t2, t2, -1329
# CHECK-EXPAND-NEXT: slli t2, t2, 35
li t2, 0x1234567800000000
# CHECK-INST: addi t3, zero, 7
@@ -122,7 +122,7 @@ li t2, 0x1234567800000000
# CHECK-ALIAS-NEXT: addi t3, t3, 15
li t3, 0x700000000B00000F
# CHECK-EXPAND: lui t4, 583
-# CHECK-EXPAND-NEXT: addiw t4, t4, -1875
+# CHECK-EXPAND-NEXT: addi t4, t4, -1875
# CHECK-EXPAND-NEXT: slli t4, t4, 14
# CHECK-EXPAND-NEXT: addi t4, t4, -947
# CHECK-EXPAND-NEXT: slli t4, t4, 12
@@ -171,18 +171,18 @@ li x10, 0xE000000001FFFFFF
li x11, 0xFFFC007FFFFFF7FF
# CHECK-INST: lui a2, 349525
-# CHECK-INST-NEXT: addiw a2, a2, 1365
+# CHECK-INST-NEXT: addi a2, a2, 1365
# CHECK-INST-NEXT: slli a2, a2, 1
# CHECK-ALIAS: lui a2, 349525
-# CHECK-ALIAS-NEXT: addiw a2, a2, 1365
+# CHECK-ALIAS-NEXT: addi a2, a2, 1365
# CHECK-ALIAS-NEXT: slli a2, a2, 1
li x12, 0xaaaaaaaa
# CHECK-INST: lui a3, 699051
-# CHECK-INST-NEXT: addiw a3, a3, -1365
+# CHECK-INST-NEXT: addi a3, a3, -1365
# CHECK-INST-NEXT: slli a3, a3, 1
# CHECK-ALIAS: lui a3, 699051
-# CHECK-ALIAS-NEXT: addiw a3, a3, -1365
+# CHECK-ALIAS-NEXT: addi a3, a3, -1365
# CHECK-ALIAS-NEXT: slli a3, a3, 1
li x13, 0xffffffff55555556
@@ -205,12 +205,12 @@ li a0, %pcrel_lo(.Lpcrel_hi0)
.equ CONST, 0x123456
# CHECK-EXPAND: lui a0, 291
-# CHECK-EXPAND: addiw a0, a0, 1110
+# CHECK-EXPAND: addi a0, a0, 1110
li a0, CONST
.equ CONST, 0x654321
# CHECK-EXPAND: lui a0, 1620
-# CHECK-EXPAND: addiw a0, a0, 801
+# CHECK-EXPAND: addi a0, a0, 801
li a0, CONST
.equ CONST, .Lbuf_end - .Lbuf
@@ -255,19 +255,19 @@ lla x11, 2048
la x11, -2048
lla x11, -2048
# CHECK-EXPAND: lui a1, 1
-# CHECK-EXPAND: addiw a1, a1, -2047
+# CHECK-EXPAND: addi a1, a1, -2047
la x11, 2049
lla x11, 2049
# CHECK-EXPAND: lui a1, 1048575
-# CHECK-EXPAND: addiw a1, a1, 2047
+# CHECK-EXPAND: addi a1, a1, 2047
la x11, -2049
lla x11, -2049
# CHECK-EXPAND: lui a1, 1
-# CHECK-EXPAND: addiw a1, a1, -1
+# CHECK-EXPAND: addi a1, a1, -1
la x11, 4095
lla x11, 4095
# CHECK-EXPAND: lui a1, 1048575
-# CHECK-EXPAND: addiw a1, a1, 1
+# CHECK-EXPAND: addi a1, a1, 1
la x11, -4095
lla x11, -4095
# CHECK-EXPAND: lui a2, 1
@@ -277,7 +277,7 @@ lla x12, 4096
la x12, -4096
lla x12, -4096
# CHECK-EXPAND: lui a2, 1
-# CHECK-EXPAND: addiw a2, a2, 1
+# CHECK-EXPAND: addi a2, a2, 1
la x12, 4097
lla x12, 4097
# CHECK-EXPAND: lui a2, 1048575
@@ -289,7 +289,7 @@ lla x12, -4097
la x12, 2147483647
lla x12, 2147483647
# CHECK-EXPAND: lui a2, 524288
-# CHECK-EXPAND: addiw a2, a2, 1
+# CHECK-EXPAND: addi a2, a2, 1
la x12, -2147483647
lla x12, -2147483647
# CHECK-EXPAND: lui a2, 524288
@@ -331,7 +331,7 @@ lla t1, 0x8000000000000000
la t1, -0x8000000000000000
lla t1, -0x8000000000000000
# CHECK-EXPAND: lui t2, 9321
-# CHECK-EXPAND-NEXT: addiw t2, t2, -1329
+# CHECK-EXPAND-NEXT: addi t2, t2, -1329
# CHECK-EXPAND-NEXT: slli t2, t2, 35
la t2, 0x1234567800000000
lla t2, 0x1234567800000000
@@ -348,7 +348,7 @@ lla t2, 0x1234567800000000
la t3, 0x700000000B00000F
lla t3, 0x700000000B00000F
# CHECK-EXPAND: lui t4, 583
-# CHECK-EXPAND-NEXT: addiw t4, t4, -1875
+# CHECK-EXPAND-NEXT: addi t4, t4, -1875
# CHECK-EXPAND-NEXT: slli t4, t4, 14
# CHECK-EXPAND-NEXT: addi t4, t4, -947
# CHECK-EXPAND-NEXT: slli t4, t4, 12
@@ -407,19 +407,19 @@ la x11, 0xFFFC007FFFFFF7FF
lla x11, 0xFFFC007FFFFFF7FF
# CHECK-INST: lui a2, 349525
-# CHECK-INST-NEXT: addiw a2, a2, 1365
+# CHECK-INST-NEXT: addi a2, a2, 1365
# CHECK-INST-NEXT: slli a2, a2, 1
# CHECK-ALIAS: lui a2, 349525
-# CHECK-ALIAS-NEXT: addiw a2, a2, 1365
+# CHECK-ALIAS-NEXT: addi a2, a2, 1365
# CHECK-ALIAS-NEXT: slli a2, a2, 1
la x12, 0xaaaaaaaa
lla x12, 0xaaaaaaaa
# CHECK-INST: lui a3, 699051
-# CHECK-INST-NEXT: addiw a3, a3, -1365
+# CHECK-INST-NEXT: addi a3, a3, -1365
# CHECK-INST-NEXT: slli a3, a3, 1
# CHECK-ALIAS: lui a3, 699051
-# CHECK-ALIAS-NEXT: addiw a3, a3, -1365
+# CHECK-ALIAS-NEXT: addi a3, a3, -1365
# CHECK-ALIAS-NEXT: slli a3, a3, 1
la x13, 0xffffffff55555556
lla x13, 0xffffffff55555556
@@ -433,13 +433,13 @@ lla x5, -2147485013
.equ CONSTANT, 0x123456
# CHECK-EXPAND: lui a0, 291
-# CHECK-EXPAND: addiw a0, a0, 1110
+# CHECK-EXPAND: addi a0, a0, 1110
la a0, CONSTANT
lla a0, CONSTANT
.equ CONSTANT, 0x654321
# CHECK-EXPAND: lui a0, 1620
-# CHECK-EXPAND: addiw a0, a0, 801
+# CHECK-EXPAND: addi a0, a0, 801
la a0, CONSTANT
lla a0, CONSTANT
diff --git a/llvm/test/MC/RISCV/rv64zba-aliases-valid.s b/llvm/test/MC/RISCV/rv64zba-aliases-valid.s
index 78ae18b0eaa00..d3baab58c9f1d 100644
--- a/llvm/test/MC/RISCV/rv64zba-aliases-valid.s
+++ b/llvm/test/MC/RISCV/rv64zba-aliases-valid.s
@@ -40,71 +40,71 @@ li x5, 0xbbbbb0007bb
li x5, 0xbbbbb0000
# CHECK-S-OBJ-NOALIAS: lui t1, 611378
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NOALIAS-NEXT: sh1add t1, t1, t1
# CHECK-S-OBJ: lui t1, 611378
-# CHECK-S-OBJ-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NEXT: sh1add t1, t1, t1
li x6, -5372288229
# CHECK-S-OBJ-NOALIAS: lui t1, 437198
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, -265
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -265
# CHECK-S-OBJ-NOALIAS-NEXT: sh2add t1, t1, t1
# CHECK-S-OBJ: lui t1, 437198
-# CHECK-S-OBJ-NEXT: addiw t1, t1, -265
+# CHECK-S-OBJ-NEXT: addi t1, t1, -265
# CHECK-S-OBJ-NEXT: sh2add t1, t1, t1
li x6, 8953813715
# CHECK-S-OBJ-NOALIAS: lui t1, 611378
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NOALIAS-NEXT: sh2add t1, t1, t1
# CHECK-S-OBJ: lui t1, 611378
-# CHECK-S-OBJ-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NEXT: sh2add t1, t1, t1
li x6, -8953813715
# CHECK-S-OBJ-NOALIAS: lui t1, 437198
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, -265
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -265
# CHECK-S-OBJ-NOALIAS-NEXT: sh3add t1, t1, t1
# CHECK-S-OBJ: lui t1, 437198
-# CHECK-S-OBJ-NEXT: addiw t1, t1, -265
+# CHECK-S-OBJ-NEXT: addi t1, t1, -265
# CHECK-S-OBJ-NEXT: sh3add t1, t1, t1
li x6, 16116864687
# CHECK-S-OBJ-NOALIAS: lui t1, 611378
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NOALIAS-NEXT: sh3add t1, t1, t1
# CHECK-S-OBJ: lui t1, 611378
-# CHECK-S-OBJ-NEXT: addiw t1, t1, 265
+# CHECK-S-OBJ-NEXT: addi t1, t1, 265
# CHECK-S-OBJ-NEXT: sh3add t1, t1, t1
li x6, -16116864687
# CHECK-S-OBJ-NOALIAS: lui t2, 768956
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t2, t2, -1093
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t2, t2, -1093
# CHECK-S-OBJ-NOALIAS-NEXT: slli.uw t2, t2, 12
# CHECK-S-OBJ-NOALIAS-NEXT: addi t2, t2, 1911
# CHECK-S-OBJ: lui t2, 768956
-# CHECK-S-OBJ-NEXT: addiw t2, t2, -1093
+# CHECK-S-OBJ-NEXT: addi t2, t2, -1093
# CHECK-S-OBJ-NEXT: slli.uw t2, t2, 12
# CHECK-S-OBJ-NEXT: addi t2, t2, 1911
li x7, 12900936431479
# CHECK-S-OBJ-NOALIAS: lui t1, 768955
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, 273
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, 273
# CHECK-S-OBJ-NOALIAS-NEXT: slli.uw t1, t1, 12
# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, 273
# CHECK-S-OBJ: lui t1, 768955
-# CHECK-S-OBJ-NEXT: addiw t1, t1, 273
+# CHECK-S-OBJ-NEXT: addi t1, t1, 273
# CHECK-S-OBJ-NEXT: slli.uw t1, t1, 12
# CHECK-S-OBJ-NEXT: addi t1, t1, 273
li x6, 12900925247761
# CHECK-S-OBJ-NOALIAS: lui t1, 768955
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, -1365
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -1365
# CHECK-S-OBJ-NOALIAS-NEXT: slli.uw t1, t1, 12
# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -1366
# CHECK-S-OBJ: lui t1, 768955
-# CHECK-S-OBJ-NEXT: addiw t1, t1, -1365
+# CHECK-S-OBJ-NEXT: addi t1, t1, -1365
# CHECK-S-OBJ-NEXT: slli.uw t1, t1, 12
# CHECK-S-OBJ-NEXT: addi t1, t1, -1366
li x6, 12900918536874
diff --git a/llvm/test/MC/RISCV/rv64zbs-aliases-valid.s b/llvm/test/MC/RISCV/rv64zbs-aliases-valid.s
index 0379a06ad4c8b..133a838a96521 100644
--- a/llvm/test/MC/RISCV/rv64zbs-aliases-valid.s
+++ b/llvm/test/MC/RISCV/rv64zbs-aliases-valid.s
@@ -38,21 +38,21 @@ bext x5, x6, 8
li x5, 2147485013
# CHECK-S-OBJ-NOALIAS: lui t1, 572348
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, -1093
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -1093
# CHECK-S-OBJ-NOALIAS-NEXT: bclri t1, t1, 44
# CHECK-S-OBJ-NOALIAS-NEXT: bclri t1, t1, 63
# CHECK-S-OBJ: lui t1, 572348
-# CHECK-S-OBJ-NEXT: addiw t1, t1, -1093
+# CHECK-S-OBJ-NEXT: addi t1, t1, -1093
# CHECK-S-OBJ-NEXT: bclri t1, t1, 44
# CHECK-S-OBJ-NEXT: bclri t1, t1, 63
li x6, 9223354442718100411
# CHECK-S-OBJ-NOALIAS: lui t1, 506812
-# CHECK-S-OBJ-NOALIAS-NEXT: addiw t1, t1, -1093
+# CHECK-S-OBJ-NOALIAS-NEXT: addi t1, t1, -1093
# CHECK-S-OBJ-NOALIAS-NEXT: bseti t1, t1, 46
# CHECK-S-OBJ-NOALIAS-NEXT: bseti t1, t1, 63
# CHECK-S-OBJ: lui t1, 506812
-# CHECK-S-OBJ-NEXT: addiw t1, t1, -1093
+# CHECK-S-OBJ-NEXT: addi t1, t1, -1093
# CHECK-S-OBJ-NEXT: bseti t1, t1, 46
# CHECK-S-OBJ-NEXT: bseti t1, t1, 63
li x6, -9223301666034697285
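
One final annotation (illustrative, not from the patch itself): every replacement in these diffs relies on `addiw` only differing from `addi` when sign-extending the low 32 bits of the sum actually changes the value. A minimal Python model of the two instructions, checked against constants that appear in the tests above:

```python
def sext(value, bits):
    # Sign-extend the low `bits` bits of `value`.
    value &= (1 << bits) - 1
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def lui(imm20):
    # lui places imm20 in bits [31:12]; RV64 sign-extends the 32-bit result.
    return sext(imm20 << 12, 32)

def addi(rs1, imm12):
    # addi adds the sign-extended immediate across the full 64 bits.
    return sext(rs1 + sext(imm12, 12), 64)

def addiw(rs1, imm12):
    # addiw adds, then sign-extends the low 32 bits of the sum.
    return sext(rs1 + sext(imm12, 12), 32)

# li a1, 65535 (uadd_sat.ll and others): lui a1, 16; addi(w) a1, a1, -1.
assert addi(lui(16), -1) == addiw(lui(16), -1) == 65535

# Negative Hi20 (signed-truncation-check.ll): lui a1, 1048560; addi(w) -1.
assert addi(lui(1048560), -1) == addiw(lui(1048560), -1) == -65537

# li a2, 2147483647 (rv64i-aliases-valid.s) keeps addiw: here the 32-bit
# sign extension is not a no-op, so addi would produce a different value.
assert addiw(lui(524288), -1) == 2147483647
assert addi(lui(524288), -1) != addiw(lui(524288), -1)
```
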