[llvm] [RISCV] Prefer preindexed addressing mode when XTheadMemIdx exists (PR #147921)
Craig Topper via llvm-commits
llvm-commits at lists.llvm.org
Fri Jul 11 09:02:22 PDT 2025
================
@@ -0,0 +1,62 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv64 -mattr=+m,+f,+d,+xtheadmemidx -lsr-preferred-addressing-mode=none %s -o - | FileCheck %s --check-prefixes=NONE
+; RUN: llc -mtriple=riscv64 -mattr=+m,+f,+d,+xtheadmemidx -lsr-preferred-addressing-mode=preindexed %s -o - | FileCheck %s --check-prefixes=PRE
+; RUN: llc -mtriple=riscv64 -mattr=+m,+f,+d,+xtheadmemidx -lsr-preferred-addressing-mode=postindexed %s -o - | FileCheck %s --check-prefixes=POST
+; RUN: llc -mtriple=riscv64 -mattr=+m,+f,+d,+xtheadmemidx %s -o - | FileCheck %s --check-prefixes=PRE
+
+define void @test(ptr %0) {
+; NONE-LABEL: test:
+; NONE: # %bb.0: # %entry
+; NONE-NEXT: addi a1, a0, 2047
+; NONE-NEXT: addi a1, a1, 1953
+; NONE-NEXT: .LBB0_1: # %loop
+; NONE-NEXT: # =>This Inner Loop Header: Depth=1
+; NONE-NEXT: lw a2, 0(a0)
+; NONE-NEXT: addi a2, a2, 1
+; NONE-NEXT: th.swia a2, (a0), 4, 0
+; NONE-NEXT: bne a0, a1, .LBB0_1
+; NONE-NEXT: # %bb.2: # %exit
+; NONE-NEXT: ret
+;
+; PRE-LABEL: test:
+; PRE: # %bb.0: # %entry
+; PRE-NEXT: addi a1, a0, -4
+; PRE-NEXT: addi a0, a0, 2047
+; PRE-NEXT: addi a0, a0, 1949
+; PRE-NEXT: .LBB0_1: # %loop
+; PRE-NEXT: # =>This Inner Loop Header: Depth=1
+; PRE-NEXT: th.lwib a2, (a1), 4, 0
----------------
topperc wrote:
Is this better than NONE? We moved the pointer increment from the store to the load, which didn't change the size of the loop body, but now we have an extra instruction in the preheader.
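
For context, the input is presumably a plain load/increment/store loop roughly of this shape (a sketch inferred from the checks above: 4-byte stride, end pointer at base + 4000, i.e. 1000 iterations); LSR's choice of addressing mode only changes where the pointer bump lands, not the number of instructions in the loop body:

```llvm
define void @test(ptr %0) {
entry:
  br label %loop

loop:                                             ; one i32 element per iteration
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %p = getelementptr inbounds i32, ptr %0, i64 %iv
  %v = load i32, ptr %p, align 4
  %v.inc = add i32 %v, 1
  store i32 %v.inc, ptr %p, align 4
  %iv.next = add nuw nsw i64 %iv, 1
  %done = icmp eq i64 %iv.next, 1000              ; 1000 * 4 bytes = 4000 = 2047 + 1953
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```

With NONE the post-increment lives on the store (th.swia) and the preheader is two addis; with PRE the pre-increment moves to the load (th.lwib), which needs the base biased by -4 first, hence the third addi in the preheader.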
https://github.com/llvm/llvm-project/pull/147921