[llvm] ff47989 - [AArch64][GlobalISel] Allow anyexting loads from 32b -> 64b to be legal.
Amara Emerson via llvm-commits
llvm-commits at lists.llvm.org
Mon Jan 8 08:37:57 PST 2024
Author: Amara Emerson
Date: 2024-01-08T08:37:47-08:00
New Revision: ff47989ec238dafe4a68c6a716e8dbccc9f559f5
URL: https://github.com/llvm/llvm-project/commit/ff47989ec238dafe4a68c6a716e8dbccc9f559f5
DIFF: https://github.com/llvm/llvm-project/commit/ff47989ec238dafe4a68c6a716e8dbccc9f559f5.diff
LOG: [AArch64][GlobalISel] Allow anyexting loads from 32b -> 64b to be legal.
We can already select these through the imported patterns; we were just
missing the legalizer rule to allow them to be formed.
Nano size benefit overall.
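To illustrate, the post-legalizer combiner can now fold the extend into the
load; a minimal MIR sketch, mirroring the updated test below (register names
are illustrative):

Before the combine:
    %0:_(p0) = COPY $x0
    %1:_(s32) = G_LOAD %0 :: (load (s32) from %ir.addr)
    %2:_(s64) = G_ANYEXT %1
    $x0 = COPY %2(s64)

After the combine:
    %0:_(p0) = COPY $x0
    %1:_(s64) = G_LOAD %0 :: (load (s32) from %ir.addr)
    $x0 = COPY %1(s64)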
Added:
Modified:
llvm/lib/Target/AArch64/GISel/AArch64LegalizerInfo.cpp
llvm/test/CodeGen/AArch64/GlobalISel/combine-ext-debugloc.mir
llvm/test/CodeGen/AArch64/GlobalISel/postlegalizercombiner-extending-loads.mir
Removed:
################################################################################
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64LegalizerInfo.cpp b/llvm/lib/Target/AArch64/GISel/AArch64LegalizerInfo.cpp
index 470742cdc30e6f..b657a0954d7894 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64LegalizerInfo.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64LegalizerInfo.cpp
@@ -366,7 +366,8 @@ AArch64LegalizerInfo::AArch64LegalizerInfo(const AArch64Subtarget &ST)
{v4s32, p0, s128, 8},
{v2s64, p0, s128, 8}})
// These extends are also legal
- .legalForTypesWithMemDesc({{s32, p0, s8, 8}, {s32, p0, s16, 8}})
+ .legalForTypesWithMemDesc(
+ {{s32, p0, s8, 8}, {s32, p0, s16, 8}, {s64, p0, s32, 8}})
.widenScalarToNextPow2(0, /* MinSize = */ 8)
.lowerIfMemSizeNotByteSizePow2()
.clampScalar(0, s8, s64)
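For reference, each legalForTypesWithMemDesc() entry is a {result type,
pointer type, memory type, minimum alignment in bits} tuple, so the added
entry marks an anyextending G_LOAD that produces s64 from a 32-bit memory
access as legal. A commented sketch of the rule as changed (the annotations
are editorial, not part of the patch):

    .legalForTypesWithMemDesc(
        {{s32, p0, s8, 8},    // s32 result extended from an 8-bit load
         {s32, p0, s16, 8},   // s32 result extended from a 16-bit load
         {s64, p0, s32, 8}})  // new: s64 result extended from a 32-bit load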
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-ext-debugloc.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-ext-debugloc.mir
index 860df510b21187..4c0e191f7196d4 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-ext-debugloc.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-ext-debugloc.mir
@@ -2,7 +2,7 @@
# Check that when we combine ZEXT/ANYEXT we assign the correct location.
# CHECK: !8 = !DILocation(line: 23, column: 5, scope: !4)
-# CHECK: G_AND %15, %16, debug-location !8
+# CHECK: G_AND %14, %15, debug-location !8
--- |
target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/postlegalizercombiner-extending-loads.mir b/llvm/test/CodeGen/AArch64/GlobalISel/postlegalizercombiner-extending-loads.mir
index db576419a7647c..7b3547159f18c2 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/postlegalizercombiner-extending-loads.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/postlegalizercombiner-extending-loads.mir
@@ -8,7 +8,7 @@
entry:
ret void
}
- define void @test_no_anyext(i8* %addr) {
+ define void @test_s32_to_s64(i8* %addr) {
entry:
ret void
}
@@ -21,9 +21,11 @@ body: |
bb.0.entry:
liveins: $x0
; CHECK-LABEL: name: test_zeroext
- ; CHECK: [[COPY:%[0-9]+]]:_(p0) = COPY $x0
- ; CHECK: [[ZEXTLOAD:%[0-9]+]]:_(s32) = G_ZEXTLOAD [[COPY]](p0) :: (load (s8) from %ir.addr)
- ; CHECK: $w0 = COPY [[ZEXTLOAD]](s32)
+ ; CHECK: liveins: $x0
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x0
+ ; CHECK-NEXT: [[ZEXTLOAD:%[0-9]+]]:_(s32) = G_ZEXTLOAD [[COPY]](p0) :: (load (s8) from %ir.addr)
+ ; CHECK-NEXT: $w0 = COPY [[ZEXTLOAD]](s32)
%0:_(p0) = COPY $x0
%1:_(s8) = G_LOAD %0 :: (load (s8) from %ir.addr)
%2:_(s32) = G_ZEXT %1
@@ -31,18 +33,17 @@ body: |
...
---
-name: test_no_anyext
+name: test_s32_to_s64
legalized: true
body: |
bb.0.entry:
liveins: $x0
- ; Check that we don't try to do an anyext combine. We don't want to do this
- ; because an anyexting load like s64 = G_LOAD %p (load 4) isn't legal.
- ; CHECK-LABEL: name: test_no_anyext
- ; CHECK: [[COPY:%[0-9]+]]:_(p0) = COPY $x0
- ; CHECK: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32) from %ir.addr)
- ; CHECK: [[ANYEXT:%[0-9]+]]:_(s64) = G_ANYEXT [[LOAD]](s32)
- ; CHECK: $x0 = COPY [[ANYEXT]](s64)
+ ; CHECK-LABEL: name: test_s32_to_s64
+ ; CHECK: liveins: $x0
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x0
+ ; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(s64) = G_LOAD [[COPY]](p0) :: (load (s32) from %ir.addr)
+ ; CHECK-NEXT: $x0 = COPY [[LOAD]](s64)
%0:_(p0) = COPY $x0
%1:_(s32) = G_LOAD %0 :: (load (s32) from %ir.addr)
%2:_(s64) = G_ANYEXT %1