[llvm] [NFC][AArch64][GlobalISel] Add test coverage for vector store legalization (PR #134904)
Tobias Stadler via llvm-commits
llvm-commits at lists.llvm.org
Tue Apr 8 11:39:45 PDT 2025
https://github.com/tobias-stadler created https://github.com/llvm/llvm-project/pull/134904
Precommit tests for vector store legalization changes. These expose a miscompile in LegalizerHelper::reduceLoadStoreWidth for non-byte-sized vector elements, which will be fixed in a follow-up patch.
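
For context, the overlapping stores are visible in the autogenerated CHECK lines of store-narrow-non-byte-sized below: each 9-bit lane is masked to 9 bits, widened to an s16 store, and written only one byte past the previous lane, so consecutive two-byte stores overlap. The following standalone C++ sketch (not LLVM code) replays that store pattern on a byte buffer; the lane value 0x100 is a hypothetical choice that makes the clobbering visible, whereas the test itself uses 42:

  // Sketch of the scalarized store sequence emitted for G_STORE <8 x s9>:
  // lane I is masked to 9 bits, widened to an s16 store, and written at
  // byte offset I, so the byte holding a lane's ninth bit is overwritten
  // by the next lane's store.
  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  int main() {
    uint8_t Mem[16] = {0};
    // Hypothetical lane value with bit 8 set (the test uses 42, which
    // fits in one byte and hides the problem).
    const uint16_t Lane = 0x100 & 0x1FF; // mirrors the G_AND with 511
    for (int I = 0; I < 8; ++I) {
      // Mirrors "G_STORE ... (store (s16) into unknown-address + I)".
      std::memcpy(Mem + I, &Lane, sizeof(Lane)); // assumes a little-endian host
    }
    for (int I = 0; I < 9; ++I)
      std::printf("%02x ", Mem[I]);
    std::printf("\n");
    return 0;
  }

On a little-endian host this prints "00 00 00 00 00 00 00 00 01": every lane except the last has lost its ninth bit, whereas a correctly packed <8 x s9> occupies 72 contiguous bits.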
From 25be44e9ea546b034168af2c900652e738241d7e Mon Sep 17 00:00:00 2001
From: Tobias Stadler <mail at stadler-tobias.de>
Date: Tue, 8 Apr 2025 17:54:41 +0100
Subject: [PATCH] [NFC][AArch64][GlobalISel] Add test coverage for vector store
legalization
---
.../GlobalISel/legalize-store-vector.mir | 149 ++++++++++++++++++
1 file changed, 149 insertions(+)
create mode 100644 llvm/test/CodeGen/AArch64/GlobalISel/legalize-store-vector.mir
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-store-vector.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-store-vector.mir
new file mode 100644
index 0000000000000..b91eef4df0649
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-store-vector.mir
@@ -0,0 +1,149 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -O0 -mtriple=aarch64 -verify-machineinstrs -run-pass=legalizer -global-isel-abort=1 %s -o - | FileCheck %s
+
+# FIXME: Scalarized stores for non-byte-sized vector elements overwrite the same address multiple times with partial values
+---
+name: store-narrow-non-byte-sized
+tracksRegLiveness: true
+body: |
+ bb.1:
+ liveins: $x8
+ ; CHECK-LABEL: name: store-narrow-non-byte-sized
+ ; CHECK: liveins: $x8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x8
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 42
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 511
+ ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+ ; CHECK-NEXT: [[TRUNC:%[0-9]+]]:_(s16) = G_TRUNC [[AND]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC]](s16), [[COPY]](p0) :: (store (s16), align 16)
+ ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 1
+ ; CHECK-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C2]](s64)
+ ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY3:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND1:%[0-9]+]]:_(s32) = G_AND [[COPY2]], [[COPY3]]
+ ; CHECK-NEXT: [[TRUNC1:%[0-9]+]]:_(s16) = G_TRUNC [[AND1]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC1]](s16), [[PTR_ADD]](p0) :: (store (s16) into unknown-address + 1, align 1)
+ ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
+ ; CHECK-NEXT: [[PTR_ADD1:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C3]](s64)
+ ; CHECK-NEXT: [[COPY4:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY5:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND2:%[0-9]+]]:_(s32) = G_AND [[COPY4]], [[COPY5]]
+ ; CHECK-NEXT: [[TRUNC2:%[0-9]+]]:_(s16) = G_TRUNC [[AND2]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC2]](s16), [[PTR_ADD1]](p0) :: (store (s16) into unknown-address + 2)
+ ; CHECK-NEXT: [[C4:%[0-9]+]]:_(s64) = G_CONSTANT i64 3
+ ; CHECK-NEXT: [[PTR_ADD2:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C4]](s64)
+ ; CHECK-NEXT: [[COPY6:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY7:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND3:%[0-9]+]]:_(s32) = G_AND [[COPY6]], [[COPY7]]
+ ; CHECK-NEXT: [[TRUNC3:%[0-9]+]]:_(s16) = G_TRUNC [[AND3]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC3]](s16), [[PTR_ADD2]](p0) :: (store (s16) into unknown-address + 3, align 1)
+ ; CHECK-NEXT: [[C5:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+ ; CHECK-NEXT: [[PTR_ADD3:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C5]](s64)
+ ; CHECK-NEXT: [[COPY8:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY9:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND4:%[0-9]+]]:_(s32) = G_AND [[COPY8]], [[COPY9]]
+ ; CHECK-NEXT: [[TRUNC4:%[0-9]+]]:_(s16) = G_TRUNC [[AND4]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC4]](s16), [[PTR_ADD3]](p0) :: (store (s16) into unknown-address + 4, align 4)
+ ; CHECK-NEXT: [[C6:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
+ ; CHECK-NEXT: [[PTR_ADD4:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C6]](s64)
+ ; CHECK-NEXT: [[COPY10:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY11:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND5:%[0-9]+]]:_(s32) = G_AND [[COPY10]], [[COPY11]]
+ ; CHECK-NEXT: [[TRUNC5:%[0-9]+]]:_(s16) = G_TRUNC [[AND5]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC5]](s16), [[PTR_ADD4]](p0) :: (store (s16) into unknown-address + 5, align 1)
+ ; CHECK-NEXT: [[C7:%[0-9]+]]:_(s64) = G_CONSTANT i64 6
+ ; CHECK-NEXT: [[PTR_ADD5:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C7]](s64)
+ ; CHECK-NEXT: [[COPY12:%[0-9]+]]:_(s32) = COPY [[C]](s32)
+ ; CHECK-NEXT: [[COPY13:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND6:%[0-9]+]]:_(s32) = G_AND [[COPY12]], [[COPY13]]
+ ; CHECK-NEXT: [[TRUNC6:%[0-9]+]]:_(s16) = G_TRUNC [[AND6]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC6]](s16), [[PTR_ADD5]](p0) :: (store (s16) into unknown-address + 6)
+ ; CHECK-NEXT: [[C8:%[0-9]+]]:_(s64) = G_CONSTANT i64 7
+ ; CHECK-NEXT: [[PTR_ADD6:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C8]](s64)
+ ; CHECK-NEXT: [[COPY14:%[0-9]+]]:_(s32) = COPY [[C1]](s32)
+ ; CHECK-NEXT: [[AND7:%[0-9]+]]:_(s32) = G_AND [[C]], [[COPY14]]
+ ; CHECK-NEXT: [[TRUNC7:%[0-9]+]]:_(s16) = G_TRUNC [[AND7]](s32)
+ ; CHECK-NEXT: G_STORE [[TRUNC7]](s16), [[PTR_ADD6]](p0) :: (store (s16) into unknown-address + 7, align 1)
+ ; CHECK-NEXT: RET_ReallyLR
+ %0:_(p0) = COPY $x8
+ %1:_(s9) = G_CONSTANT i9 42
+ %2:_(<8 x s9>) = G_BUILD_VECTOR %1(s9), %1(s9), %1(s9), %1(s9), %1(s9), %1(s9), %1(s9), %1(s9)
+ G_STORE %2(<8 x s9>), %0(p0) :: (store (<8 x s9>), align 16)
+ RET_ReallyLR
+...
+
+# FIXME: Vector stores only sometimes act as per-lane truncating stores, see PR#121169.
+---
+name: store-narrow-per-lane-trunc
+tracksRegLiveness: true
+body: |
+ bb.1:
+ liveins: $x8
+ ; CHECK-LABEL: name: store-narrow-per-lane-trunc
+ ; CHECK: liveins: $x8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x8
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 42
+ ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C]](s64), [[C]](s64)
+ ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C]](s64), [[C]](s64)
+ ; CHECK-NEXT: G_STORE [[BUILD_VECTOR]](<2 x s64>), [[COPY]](p0) :: (store (<2 x s64>))
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 16
+ ; CHECK-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C1]](s64)
+ ; CHECK-NEXT: G_STORE [[BUILD_VECTOR1]](<2 x s64>), [[PTR_ADD]](p0) :: (store (<2 x s64>) into unknown-address + 16)
+ ; CHECK-NEXT: RET_ReallyLR
+ %0:_(p0) = COPY $x8
+ %1:_(s64) = G_CONSTANT i64 42
+ %2:_(<4 x s64>) = G_BUILD_VECTOR %1(s64), %1(s64), %1(s64), %1(s64)
+ G_STORE %2(<4 x s64>), %0(p0) :: (store (<4 x s63>), align 16)
+ RET_ReallyLR
+...
+
+# FIXME: Clarify behavior of stores between scalar and vector types in documentation
+---
+name: store-narrow-vector-high-bits
+tracksRegLiveness: true
+body: |
+ bb.1:
+ liveins: $x8
+ ; CHECK-LABEL: name: store-narrow-vector-high-bits
+ ; CHECK: liveins: $x8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x8
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 42
+ ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C]](s64), [[C]](s64)
+ ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<2 x s64>) = G_BUILD_VECTOR [[C]](s64), [[C]](s64)
+ ; CHECK-NEXT: G_STORE [[BUILD_VECTOR]](<2 x s64>), [[COPY]](p0) :: (store (<2 x s64>))
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 16
+ ; CHECK-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C1]](s64)
+ ; CHECK-NEXT: G_STORE [[BUILD_VECTOR1]](<2 x s64>), [[PTR_ADD]](p0) :: (store (<2 x s64>) into unknown-address + 16)
+ ; CHECK-NEXT: RET_ReallyLR
+ %0:_(p0) = COPY $x8
+ %1:_(s64) = G_CONSTANT i64 42
+ %2:_(<4 x s64>) = G_BUILD_VECTOR %1(s64), %1(s64), %1(s64), %1(s64)
+ G_STORE %2(<4 x s64>), %0(p0) :: (store (s252), align 16)
+ RET_ReallyLR
+...
+---
+name: store-narrow-scalar-high-bits
+tracksRegLiveness: true
+body: |
+ bb.1:
+ liveins: $x8
+ ; CHECK-LABEL: name: store-narrow-scalar-high-bits
+ ; CHECK: liveins: $x8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x8
+ ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 42
+ ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s64) = G_CONSTANT i64 0
+ ; CHECK-NEXT: G_STORE [[C]](s64), [[COPY]](p0) :: (store (s64), align 16)
+ ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 8
+ ; CHECK-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C2]](s64)
+ ; CHECK-NEXT: G_STORE [[C1]](s64), [[PTR_ADD]](p0) :: (store (s64) into unknown-address + 8)
+ ; CHECK-NEXT: RET_ReallyLR
+ %0:_(p0) = COPY $x8
+ %1:_(s128) = G_CONSTANT i128 42
+ G_STORE %1(s128), %0(p0) :: (store (<2 x s63>), align 16)
+ RET_ReallyLR
+...