<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
I don't understand how two-byte alignment could possibly be *worse* than one-byte alignment. You can always just pretend the specified alignment was 1 instead of 2.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Maybe you meant to check the size of the access?<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
-Eli<br>
</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> llvm-commits <llvm-commits-bounces@lists.llvm.org> on behalf of Matt Arsenault via llvm-commits <llvm-commits@lists.llvm.org><br>
<b>Sent:</b> Tuesday, February 11, 2020 3:35 PM<br>
<b>To:</b> llvm-commits@lists.llvm.org <llvm-commits@lists.llvm.org><br>
<b>Subject:</b> [EXT] [llvm] 86f9117 - AMDGPU: Don't report 2-byte alignment as fast</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText"><br>
Author: Matt Arsenault<br>
Date: 2020-02-11T18:35:00-05:00<br>
New Revision: 86f9117d476bcef2f5e0eabae4781e99877ce7b5<br>
<br>
URL: <a href="https://github.com/llvm/llvm-project/commit/86f9117d476bcef2f5e0eabae4781e99877ce7b5">
https://github.com/llvm/llvm-project/commit/86f9117d476bcef2f5e0eabae4781e99877ce7b5</a><br>
DIFF: <a href="https://github.com/llvm/llvm-project/commit/86f9117d476bcef2f5e0eabae4781e99877ce7b5.diff">
https://github.com/llvm/llvm-project/commit/86f9117d476bcef2f5e0eabae4781e99877ce7b5.diff</a><br>
<br>
LOG: AMDGPU: Don't report 2-byte alignment as fast<br>
<br>
This is apparently worse than 1-byte alignment. This does not attempt<br>
to decompose 2-byte aligned wide stores, but will stop trying to<br>
produce them.<br>
<br>
Also fix bug in LoadStoreVectorizer which was decreasing the alignment<br>
and vectorizing stack accesses. It was assuming a stack object was an<br>
alloca that could have its base alignment changed, which is not true<br>
if the pointer is derived from a function argument.<br>
<br>
Added: <br>
llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.global.ll<br>
llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.private.ll<br>
<br>
Modified: <br>
llvm/lib/Target/AMDGPU/SIISelLowering.cpp<br>
llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp<br>
llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll<br>
llvm/test/CodeGen/AMDGPU/unaligned-load-store.ll<br>
llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/adjust-alloca-alignment.ll<br>
llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores-private.ll<br>
llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll<br>
<br>
Removed: <br>
<br>
<br>
<br>
################################################################################<br>
diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp<br>
index b6966e66c36b..55003521b8b2 100644<br>
--- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp<br>
+++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp<br>
@@ -1251,9 +1251,11 @@ bool SITargetLowering::allowsMisalignedMemoryAccessesImpl(<br>
// If we have a uniform constant load, it still requires using a slow<br>
// buffer instruction if unaligned.<br>
if (IsFast) {<br>
+ // Accesses can really be issued as 1-byte aligned or 4-byte aligned, so<br>
+ // 2-byte alignment is worse than 1 unless doing a 2-byte access.<br>
*IsFast = (AddrSpace == AMDGPUAS::CONSTANT_ADDRESS ||<br>
AddrSpace == AMDGPUAS::CONSTANT_ADDRESS_32BIT) ?<br>
- (Align % 4 == 0) : true;<br>
+ Align >= 4 : Align != 2;<br>
}<br>
<br>
return true;<br>
<br>
diff --git a/llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp b/llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp<br>
index 3b22f3082c33..8ab03c34335d 100644<br>
--- a/llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp<br>
+++ b/llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp<br>
@@ -1028,8 +1028,10 @@ bool Vectorizer::vectorizeStoreChain(<br>
unsigned NewAlign = getOrEnforceKnownAlignment(S0->getPointerOperand(),<br>
StackAdjustedAlignment,<br>
DL, S0, nullptr, &DT);<br>
- if (NewAlign != 0)<br>
+ if (NewAlign >= Alignment.value())<br>
Alignment = Align(NewAlign);<br>
+ else<br>
+ return false;<br>
}<br>
<br>
if (!TTI.isLegalToVectorizeStoreChain(SzInBytes, Alignment.value(), AS)) {<br>
@@ -1168,8 +1170,12 @@ bool Vectorizer::vectorizeLoadChain(<br>
vectorizeLoadChain(Chains.second, InstructionsProcessed);<br>
}<br>
<br>
- Alignment = getOrEnforceKnownAlignment(<br>
- L0->getPointerOperand(), StackAdjustedAlignment, DL, L0, nullptr, &DT);<br>
+ unsigned NewAlign = getOrEnforceKnownAlignment(<br>
+ L0->getPointerOperand(), StackAdjustedAlignment, DL, L0, nullptr, &DT);<br>
+ if (NewAlign >= Alignment)<br>
+ Alignment = NewAlign;<br>
+ else<br>
+ return false;<br>
}<br>
<br>
if (!TTI.isLegalToVectorizeLoadChain(SzInBytes, Alignment, AS)) {<br>
<br>
diff --git a/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll b/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll<br>
index 9ec8b7573ceb..0df32537808a 100644<br>
--- a/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll<br>
+++ b/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll<br>
@@ -199,14 +199,17 @@ define amdgpu_kernel void @vload2_private(i16 addrspace(1)* nocapture readonly %<br>
; GCN-NEXT: s_waitcnt lgkmcnt(0)<br>
; GCN-NEXT: v_mov_b32_e32 v2, s4<br>
; GCN-NEXT: v_mov_b32_e32 v3, s5<br>
-; GCN-NEXT: global_load_ushort v4, v[2:3], off offset:4<br>
-; GCN-NEXT: global_load_dword v2, v[2:3], off<br>
+; GCN-NEXT: global_load_ushort v4, v[2:3], off<br>
; GCN-NEXT: v_mov_b32_e32 v0, s6<br>
; GCN-NEXT: v_mov_b32_e32 v1, s7<br>
; GCN-NEXT: s_waitcnt vmcnt(0)<br>
-; GCN-NEXT: buffer_store_short v2, off, s[0:3], s9 offset:4<br>
-; GCN-NEXT: buffer_store_short_d16_hi v2, off, s[0:3], s9 offset:6<br>
-; GCN-NEXT: buffer_store_short v4, off, s[0:3], s9 offset:8<br>
+; GCN-NEXT: buffer_store_short v4, off, s[0:3], s9 offset:4<br>
+; GCN-NEXT: global_load_ushort v4, v[2:3], off offset:2<br>
+; GCN-NEXT: s_waitcnt vmcnt(0)<br>
+; GCN-NEXT: buffer_store_short v4, off, s[0:3], s9 offset:6<br>
+; GCN-NEXT: global_load_ushort v2, v[2:3], off offset:4<br>
+; GCN-NEXT: s_waitcnt vmcnt(0)<br>
+; GCN-NEXT: buffer_store_short v2, off, s[0:3], s9 offset:8<br>
; GCN-NEXT: buffer_load_ushort v2, off, s[0:3], s9 offset:4<br>
; GCN-NEXT: buffer_load_ushort v4, off, s[0:3], s9 offset:6<br>
; GCN-NEXT: s_waitcnt vmcnt(1)<br>
<br>
diff --git a/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.global.ll b/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.global.ll<br>
new file mode 100644<br>
index 000000000000..34f8706ac66c<br>
--- /dev/null<br>
+++ b/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.global.ll<br>
@@ -0,0 +1,328 @@<br>
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -mattr=-unaligned-buffer-access < %s | FileCheck -check-prefixes=GCN,GFX7-ALIGNED %s<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -mattr=+unaligned-buffer-access < %s | FileCheck -check-prefixes=GCN,GFX7-UNALIGNED %s<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -mattr=+unaligned-buffer-access < %s | FileCheck -check-prefixes=GCN,GFX9 %s<br>
+<br>
+; Should not merge this to a dword load<br>
+define i32 @global_load_2xi16_align2(i16 addrspace(1)* %p) #0 {<br>
+; GFX7-ALIGNED-LABEL: global_load_2xi16_align2:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v0<br>
+; GFX7-ALIGNED-NEXT: v_addc_u32_e32 v3, vcc, 0, v1, vcc<br>
+; GFX7-ALIGNED-NEXT: flat_load_ushort v0, v[0:1]<br>
+; GFX7-ALIGNED-NEXT: flat_load_ushort v1, v[2:3]<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v0, v1<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_load_2xi16_align2:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v0<br>
+; GFX7-UNALIGNED-NEXT: v_addc_u32_e32 v3, vcc, 0, v1, vcc<br>
+; GFX7-UNALIGNED-NEXT: flat_load_ushort v0, v[0:1]<br>
+; GFX7-UNALIGNED-NEXT: flat_load_ushort v1, v[2:3]<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1<br>
+; GFX7-UNALIGNED-NEXT: v_or_b32_e32 v0, v0, v1<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: global_load_2xi16_align2:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: global_load_ushort v2, v[0:1], off<br>
+; GFX9-NEXT: global_load_ushort v0, v[0:1], off offset:2<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_lshl_or_b32 v0, v0, 16, v2<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(1)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(1)* %p, align 2<br>
+ %p.1 = load i16, i16 addrspace(1)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should not merge this to a dword store<br>
+define amdgpu_kernel void @global_store_2xi16_align2(i16 addrspace(1)* %p, i16 addrspace(1)* %r) #0 {<br>
+; GFX7-ALIGNED-LABEL: global_store_2xi16_align2:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v2, 1<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-ALIGNED-NEXT: s_add_u32 s2, s0, 2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-ALIGNED-NEXT: flat_store_short v[0:1], v2<br>
+; GFX7-ALIGNED-NEXT: s_addc_u32 s3, s1, 0<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, s2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v2, 2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v1, s3<br>
+; GFX7-ALIGNED-NEXT: flat_store_short v[0:1], v2<br>
+; GFX7-ALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_store_2xi16_align2:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v2, 1<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-UNALIGNED-NEXT: s_add_u32 s2, s0, 2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-UNALIGNED-NEXT: flat_store_short v[0:1], v2<br>
+; GFX7-UNALIGNED-NEXT: s_addc_u32 s3, s1, 0<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, s2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v2, 2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v1, s3<br>
+; GFX7-UNALIGNED-NEXT: flat_store_short v[0:1], v2<br>
+; GFX7-UNALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX9-LABEL: global_store_2xi16_align2:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x8<br>
+; GFX9-NEXT: v_mov_b32_e32 v2, 1<br>
+; GFX9-NEXT: v_mov_b32_e32 v3, 2<br>
+; GFX9-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX9-NEXT: global_store_short v[0:1], v2, off<br>
+; GFX9-NEXT: global_store_short v[0:1], v3, off offset:2<br>
+; GFX9-NEXT: s_endpgm<br>
+ %gep.r = getelementptr i16, i16 addrspace(1)* %r, i64 1<br>
+ store i16 1, i16 addrspace(1)* %r, align 2<br>
+ store i16 2, i16 addrspace(1)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
+<br>
+; Should produce align 1 dword when legal<br>
+define i32 @global_load_2xi16_align1(i16 addrspace(1)* %p) #0 {<br>
+; GFX7-ALIGNED-LABEL: global_load_2xi16_align1:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v0<br>
+; GFX7-ALIGNED-NEXT: v_addc_u32_e32 v3, vcc, 0, v1, vcc<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v4, vcc, 1, v0<br>
+; GFX7-ALIGNED-NEXT: v_addc_u32_e32 v5, vcc, 0, v1, vcc<br>
+; GFX7-ALIGNED-NEXT: flat_load_ubyte v6, v[0:1]<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v0, vcc, 3, v0<br>
+; GFX7-ALIGNED-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc<br>
+; GFX7-ALIGNED-NEXT: flat_load_ubyte v2, v[2:3]<br>
+; GFX7-ALIGNED-NEXT: flat_load_ubyte v3, v[4:5]<br>
+; GFX7-ALIGNED-NEXT: flat_load_ubyte v0, v[0:1]<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(1) lgkmcnt(1)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v1, 8, v3<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v0, 8, v0<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v0, v2<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v1, v1, v6<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v0, 16, v0<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v1, v0<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_load_2xi16_align1:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: flat_load_dword v0, v[0:1]<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: global_load_2xi16_align1:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: global_load_dword v0, v[0:1], off<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, 0xffff<br>
+; GFX9-NEXT: s_mov_b32 s4, 0xffff<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_bfi_b32 v1, v1, 0, v0<br>
+; GFX9-NEXT: v_and_or_b32 v0, v0, s4, v1<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(1)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(1)* %p, align 1<br>
+ %p.1 = load i16, i16 addrspace(1)* %gep.p, align 1<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should produce align 1 dword when legal<br>
+define amdgpu_kernel void @global_store_2xi16_align1(i16 addrspace(1)* %p, i16 addrspace(1)* %r) #0 {<br>
+; GFX7-ALIGNED-LABEL: global_store_2xi16_align1:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v4, 1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v5, 0<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: s_add_u32 s2, s0, 2<br>
+; GFX7-ALIGNED-NEXT: s_addc_u32 s3, s1, 0<br>
+; GFX7-ALIGNED-NEXT: s_add_u32 s4, s0, 1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-ALIGNED-NEXT: s_addc_u32 s5, s1, 0<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-ALIGNED-NEXT: s_add_u32 s0, s0, 3<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v2, s4<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v3, s5<br>
+; GFX7-ALIGNED-NEXT: flat_store_byte v[0:1], v4<br>
+; GFX7-ALIGNED-NEXT: flat_store_byte v[2:3], v5<br>
+; GFX7-ALIGNED-NEXT: s_addc_u32 s1, s1, 0<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v2, s2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v4, 2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v3, s3<br>
+; GFX7-ALIGNED-NEXT: flat_store_byte v[0:1], v5<br>
+; GFX7-ALIGNED-NEXT: flat_store_byte v[2:3], v4<br>
+; GFX7-ALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_store_2xi16_align1:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-UNALIGNED-NEXT: flat_store_dword v[0:1], v2<br>
+; GFX7-UNALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX9-LABEL: global_store_2xi16_align1:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x8<br>
+; GFX9-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX9-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX9-NEXT: global_store_dword v[0:1], v2, off<br>
+; GFX9-NEXT: s_endpgm<br>
+ %gep.r = getelementptr i16, i16 addrspace(1)* %r, i64 1<br>
+ store i16 1, i16 addrspace(1)* %r, align 1<br>
+ store i16 2, i16 addrspace(1)* %gep.r, align 1<br>
+ ret void<br>
+}<br>
+<br>
+; Should merge this to a dword load<br>
+define i32 @global_load_2xi16_align4(i16 addrspace(1)* %p) #0 {<br>
+; GFX7-LABEL: load_2xi16_align4:<br>
+; GFX7: ; %bb.0:<br>
+; GFX7-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-NEXT: flat_load_dword v0, v[0:1]<br>
+; GFX7-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-ALIGNED-LABEL: global_load_2xi16_align4:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: flat_load_dword v0, v[0:1]<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_load_2xi16_align4:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: flat_load_dword v0, v[0:1]<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: global_load_2xi16_align4:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: global_load_dword v0, v[0:1], off<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, 0xffff<br>
+; GFX9-NEXT: s_mov_b32 s4, 0xffff<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_bfi_b32 v1, v1, 0, v0<br>
+; GFX9-NEXT: v_and_or_b32 v0, v0, s4, v1<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(1)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(1)* %p, align 4<br>
+ %p.1 = load i16, i16 addrspace(1)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should merge this to a dword store<br>
+define amdgpu_kernel void @global_store_2xi16_align4(i16 addrspace(1)* %p, i16 addrspace(1)* %r) #0 {<br>
+; GFX7-LABEL: global_store_2xi16_align4:<br>
+; GFX7: ; %bb.0:<br>
+; GFX7-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX7-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-NEXT: flat_store_dword v[0:1], v2<br>
+; GFX7-NEXT: s_endpgm<br>
+;<br>
+; GFX7-ALIGNED-LABEL: global_store_2xi16_align4:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-ALIGNED-NEXT: flat_store_dword v[0:1], v2<br>
+; GFX7-ALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: global_store_2xi16_align4:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-UNALIGNED-NEXT: flat_store_dword v[0:1], v2<br>
+; GFX7-UNALIGNED-NEXT: s_endpgm<br>
+;<br>
+; GFX9-LABEL: global_store_2xi16_align4:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x8<br>
+; GFX9-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX9-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX9-NEXT: global_store_dword v[0:1], v2, off<br>
+; GFX9-NEXT: s_endpgm<br>
+ %gep.r = getelementptr i16, i16 addrspace(1)* %r, i64 1<br>
+ store i16 1, i16 addrspace(1)* %r, align 4<br>
+ store i16 2, i16 addrspace(1)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
+<br>
<br>
diff --git a/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.private.ll b/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.private.ll<br>
new file mode 100644<br>
index 000000000000..0053d2f3019d<br>
--- /dev/null<br>
+++ b/llvm/test/CodeGen/AMDGPU/fast-unaligned-load-store.private.ll<br>
@@ -0,0 +1,245 @@<br>
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -mattr=-unaligned-scratch-access < %s | FileCheck -check-prefixes=GCN,GFX7-ALIGNED %s<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=hawaii -mattr=+unaligned-scratch-access < %s | FileCheck -check-prefixes=GCN,GFX7-UNALIGNED %s<br>
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -mattr=+unaligned-scratch-access < %s | FileCheck -check-prefixes=GCN,GFX9 %s<br>
+<br>
+; Should not merge this to a dword load<br>
+define i32 @private_load_2xi16_align2(i16 addrspace(5)* %p) #0 {<br>
+; GFX7-ALIGNED-LABEL: private_load_2xi16_align2:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v1, vcc, 2, v0<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ushort v1, v1, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ushort v0, v0, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(1)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v0, v1<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: private_load_2xi16_align2:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_add_i32_e32 v1, vcc, 2, v0<br>
+; GFX7-UNALIGNED-NEXT: buffer_load_ushort v1, v1, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: buffer_load_ushort v0, v0, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(1)<br>
+; GFX7-UNALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_or_b32_e32 v0, v0, v1<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: private_load_2xi16_align2:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: buffer_load_ushort v1, v0, s[0:3], s33 offen<br>
+; GFX9-NEXT: buffer_load_ushort v0, v0, s[0:3], s33 offen offset:2<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_lshl_or_b32 v0, v0, 16, v1<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(5)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(5)* %p, align 2<br>
+ %p.1 = load i16, i16 addrspace(5)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should not merge this to a dword store<br>
+define void @private_store_2xi16_align2(i16 addrspace(5)* %p, i16 addrspace(5)* %r) #0 {<br>
+; GFX7-ALIGNED-LABEL: private_store_2xi16_align2:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v3, 1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, 2<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v1<br>
+; GFX7-ALIGNED-NEXT: buffer_store_short v3, v1, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_store_short v0, v2, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: private_store_2xi16_align2:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v3, 1<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, 2<br>
+; GFX7-UNALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v1<br>
+; GFX7-UNALIGNED-NEXT: buffer_store_short v3, v1, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: buffer_store_short v0, v2, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: private_store_2xi16_align2:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, 1<br>
+; GFX9-NEXT: buffer_store_short v0, v1, s[0:3], s33 offen<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, 2<br>
+; GFX9-NEXT: buffer_store_short v0, v1, s[0:3], s33 offen offset:2<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.r = getelementptr i16, i16 addrspace(5)* %r, i64 1<br>
+ store i16 1, i16 addrspace(5)* %r, align 2<br>
+ store i16 2, i16 addrspace(5)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
+<br>
+; Should produce align 1 dword when legal<br>
+define i32 @private_load_2xi16_align1(i16 addrspace(5)* %p) #0 {<br>
+; GFX7-ALIGNED-LABEL: private_load_2xi16_align1:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v1, vcc, 3, v0<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v0<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v3, vcc, 1, v0<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ubyte v1, v1, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ubyte v3, v3, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ubyte v2, v2, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_load_ubyte v0, v0, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(3)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v1, 8, v1<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(2)<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v3, 8, v3<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(1)<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v1, v1, v2<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v3, v0<br>
+; GFX7-ALIGNED-NEXT: v_lshlrev_b32_e32 v1, 16, v1<br>
+; GFX7-ALIGNED-NEXT: v_or_b32_e32 v0, v0, v1<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: private_load_2xi16_align1:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: buffer_load_dword v0, v0, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: private_load_2xi16_align1:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: buffer_load_dword v0, v0, s[0:3], s33 offen<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, 0xffff<br>
+; GFX9-NEXT: s_mov_b32 s4, 0xffff<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_bfi_b32 v1, v1, 0, v0<br>
+; GFX9-NEXT: v_and_or_b32 v0, v0, s4, v1<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(5)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(5)* %p, align 1<br>
+ %p.1 = load i16, i16 addrspace(5)* %gep.p, align 1<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should produce align 1 dword when legal<br>
+define void @private_store_2xi16_align1(i16 addrspace(5)* %p, i16 addrspace(5)* %r) #0 {<br>
+; GFX7-ALIGNED-LABEL: private_store_2xi16_align1:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v3, 1<br>
+; GFX7-ALIGNED-NEXT: buffer_store_byte v3, v1, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v2, vcc, 2, v1<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v3, vcc, 1, v1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v4, 0<br>
+; GFX7-ALIGNED-NEXT: v_add_i32_e32 v1, vcc, 3, v1<br>
+; GFX7-ALIGNED-NEXT: v_mov_b32_e32 v0, 2<br>
+; GFX7-ALIGNED-NEXT: buffer_store_byte v4, v3, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_store_byte v4, v1, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: buffer_store_byte v0, v2, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: private_store_2xi16_align1:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: v_mov_b32_e32 v0, 0x20001<br>
+; GFX7-UNALIGNED-NEXT: buffer_store_dword v0, v1, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: private_store_2xi16_align1:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: v_mov_b32_e32 v0, 0x20001<br>
+; GFX9-NEXT: buffer_store_dword v0, v1, s[0:3], s33 offen<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.r = getelementptr i16, i16 addrspace(5)* %r, i64 1<br>
+ store i16 1, i16 addrspace(5)* %r, align 1<br>
+ store i16 2, i16 addrspace(5)* %gep.r, align 1<br>
+ ret void<br>
+}<br>
+<br>
+; Should merge this to a dword load<br>
+define i32 @private_load_2xi16_align4(i16 addrspace(5)* %p) #0 {<br>
+; GFX7-LABEL: load_2xi16_align4:<br>
+; GFX7: ; %bb.0:<br>
+; GFX7-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-NEXT: flat_load_dword v0, v[0:1]<br>
+; GFX7-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)<br>
+; GFX7-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-ALIGNED-LABEL: private_load_2xi16_align4:<br>
+; GFX7-ALIGNED: ; %bb.0:<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: buffer_load_dword v0, v0, s[0:3], s33 offen<br>
+; GFX7-ALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-ALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX7-UNALIGNED-LABEL: private_load_2xi16_align4:<br>
+; GFX7-UNALIGNED: ; %bb.0:<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: buffer_load_dword v0, v0, s[0:3], s33 offen<br>
+; GFX7-UNALIGNED-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX7-UNALIGNED-NEXT: s_setpc_b64 s[30:31]<br>
+;<br>
+; GFX9-LABEL: private_load_2xi16_align4:<br>
+; GFX9: ; %bb.0:<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GFX9-NEXT: buffer_load_dword v0, v0, s[0:3], s33 offen<br>
+; GFX9-NEXT: v_mov_b32_e32 v1, 0xffff<br>
+; GFX9-NEXT: s_mov_b32 s4, 0xffff<br>
+; GFX9-NEXT: s_waitcnt vmcnt(0)<br>
+; GFX9-NEXT: v_bfi_b32 v1, v1, 0, v0<br>
+; GFX9-NEXT: v_and_or_b32 v0, v0, s4, v1<br>
+; GFX9-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.p = getelementptr i16, i16 addrspace(5)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(5)* %p, align 4<br>
+ %p.1 = load i16, i16 addrspace(5)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; Should merge this to a dword store<br>
+define void @private_store_2xi16_align4(i16 addrspace(5)* %p, i16 addrspace(5)* %r) #0 {<br>
+; GFX7-LABEL: private_store_2xi16_align4:<br>
+; GFX7: ; %bb.0:<br>
+; GFX7-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2<br>
+; GFX7-NEXT: v_mov_b32_e32 v2, 0x20001<br>
+; GFX7-NEXT: s_waitcnt lgkmcnt(0)<br>
+; GFX7-NEXT: v_mov_b32_e32 v0, s0<br>
+; GFX7-NEXT: v_mov_b32_e32 v1, s1<br>
+; GFX7-NEXT: flat_store_dword v[0:1], v2<br>
+; GFX7-NEXT: s_endpgm<br>
+;<br>
+; GCN-LABEL: private_store_2xi16_align4:<br>
+; GCN: ; %bb.0:<br>
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)<br>
+; GCN-NEXT: v_mov_b32_e32 v0, 0x20001<br>
+; GCN-NEXT: buffer_store_dword v0, v1, s[0:3], s33 offen<br>
+; GCN-NEXT: s_waitcnt vmcnt(0)<br>
+; GCN-NEXT: s_setpc_b64 s[30:31]<br>
+ %gep.r = getelementptr i16, i16 addrspace(5)* %r, i64 1<br>
+ store i16 1, i16 addrspace(5)* %r, align 4<br>
+ store i16 2, i16 addrspace(5)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
<br>
diff --git a/llvm/test/CodeGen/AMDGPU/unaligned-load-store.ll b/llvm/test/CodeGen/AMDGPU/unaligned-load-store.ll<br>
index 9bcf35e13a1b..020f677ee3cf 100644<br>
--- a/llvm/test/CodeGen/AMDGPU/unaligned-load-store.ll<br>
+++ b/llvm/test/CodeGen/AMDGPU/unaligned-load-store.ll<br>
@@ -665,4 +665,25 @@ define void @private_store_align2_f64(double addrspace(5)* %out, double %x) #0 {<br>
ret void<br>
}<br>
<br>
+; Should not merge this to a dword store<br>
+define amdgpu_kernel void @global_store_2xi16_align2(i16 addrspace(1)* %p, i16 addrspace(1)* %r) #0 {<br>
+ %gep.r = getelementptr i16, i16 addrspace(1)* %r, i64 1<br>
+ %v = load i16, i16 addrspace(1)* %p, align 2<br>
+ store i16 1, i16 addrspace(1)* %r, align 2<br>
+ store i16 2, i16 addrspace(1)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
+<br>
+; Should not merge this to a dword load<br>
+define i32 @load_2xi16_align2(i16 addrspace(1)* %p) #0 {<br>
+ %gep.p = getelementptr i16, i16 addrspace(1)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(1)* %p, align 2<br>
+ %p.1 = load i16, i16 addrspace(1)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
attributes #0 = { nounwind }<br>
<br>
diff --git a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/adjust-alloca-alignment.ll b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/adjust-alloca-alignment.ll<br>
index b0dd5d185c77..9f85fec33ba1 100644<br>
--- a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/adjust-alloca-alignment.ll<br>
+++ b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/adjust-alloca-alignment.ll<br>
@@ -207,4 +207,55 @@ define amdgpu_kernel void @merge_private_load_4_vector_elts_loads_v4i8() {<br>
ret void<br>
}<br>
<br>
+; Make sure we don't think the alignment will increase if the base address isn't an alloca<br>
+; ALL-LABEL: @private_store_2xi16_align2_not_alloca(<br>
+; ALL: store i16<br>
+; ALL: store i16<br>
+define void @private_store_2xi16_align2_not_alloca(i16 addrspace(5)* %p, i16 addrspace(5)* %r) #0 {<br>
+ %gep.r = getelementptr i16, i16 addrspace(5)* %r, i32 1<br>
+ store i16 1, i16 addrspace(5)* %r, align 2<br>
+ store i16 2, i16 addrspace(5)* %gep.r, align 2<br>
+ ret void<br>
+}<br>
+<br>
+; ALL-LABEL: @private_store_2xi16_align1_not_alloca(<br>
+; ALIGNED: store i16<br>
+; ALIGNED: store i16<br>
+; UNALIGNED: store <2 x i16><br>
+define void @private_store_2xi16_align1_not_alloca(i16 addrspace(5)* %p, i16 addrspace(5)* %r) #0 {<br>
+ %gep.r = getelementptr i16, i16 addrspace(5)* %r, i32 1<br>
+ store i16 1, i16 addrspace(5)* %r, align 1<br>
+ store i16 2, i16 addrspace(5)* %gep.r, align 1<br>
+ ret void<br>
+}<br>
+<br>
+; ALL-LABEL: @private_load_2xi16_align2_not_alloca(<br>
+; ALL: load i16<br>
+; ALL: load i16<br>
+define i32 @private_load_2xi16_align2_not_alloca(i16 addrspace(5)* %p) #0 {<br>
+ %gep.p = getelementptr i16, i16 addrspace(5)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(5)* %p, align 2<br>
+ %p.1 = load i16, i16 addrspace(5)* %gep.p, align 2<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
+; ALL-LABEL: @private_load_2xi16_align1_not_alloca(<br>
+; ALIGNED: load i16<br>
+; ALIGNED: load i16<br>
+; UNALIGNED: load <2 x i16><br>
+define i32 @private_load_2xi16_align1_not_alloca(i16 addrspace(5)* %p) #0 {<br>
+ %gep.p = getelementptr i16, i16 addrspace(5)* %p, i64 1<br>
+ %p.0 = load i16, i16 addrspace(5)* %p, align 1<br>
+ %p.1 = load i16, i16 addrspace(5)* %gep.p, align 1<br>
+ %zext.0 = zext i16 %p.0 to i32<br>
+ %zext.1 = zext i16 %p.1 to i32<br>
+ %shl.1 = shl i32 %zext.1, 16<br>
+ %or = or i32 %zext.0, %shl.1<br>
+ ret i32 %or<br>
+}<br>
+<br>
attributes #0 = { nounwind }<br>
<br>
diff --git a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores-private.ll b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores-private.ll<br>
index 4292cbcec850..31a1c270bd0e 100644<br>
--- a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores-private.ll<br>
+++ b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores-private.ll<br>
@@ -57,20 +57,10 @@ define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v4i32_align1(<br>
}<br>
<br>
; ALL-LABEL: @merge_private_store_4_vector_elts_loads_v4i32_align2(<br>
-; ALIGNED: store i32 9, i32 addrspace(5)* %out, align 2<br>
-; ALIGNED: store i32 1, i32 addrspace(5)* %out.gep.1, align 2<br>
-; ALIGNED: store i32 23, i32 addrspace(5)* %out.gep.2, align 2<br>
-; ALIGNED: store i32 19, i32 addrspace(5)* %out.gep.3, align 2<br>
-<br>
-; ELT16-UNALIGNED: store <4 x i32> <i32 9, i32 1, i32 23, i32 19>, <4 x i32> addrspace(5)* %1, align 2<br>
-<br>
-; ELT8-UNALIGNED: store <2 x i32><br>
-; ELT8-UNALIGNED: store <2 x i32><br>
-<br>
-; ELT4-UNALIGNED: store i32<br>
-; ELT4-UNALIGNED: store i32<br>
-; ELT4-UNALIGNED: store i32<br>
-; ELT4-UNALIGNED: store i32<br>
+; ALL: store i32<br>
+; ALL: store i32<br>
+; ALL: store i32<br>
+; ALL: store i32<br>
define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v4i32_align2(i32 addrspace(5)* %out) #0 {<br>
%out.gep.1 = getelementptr i32, i32 addrspace(5)* %out, i32 1<br>
%out.gep.2 = getelementptr i32, i32 addrspace(5)* %out, i32 2<br>
@@ -127,10 +117,8 @@ define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v2i16(i16 add<br>
}<br>
<br>
; ALL-LABEL: @merge_private_store_4_vector_elts_loads_v2i16_align2(<br>
-; ALIGNED: store i16<br>
-; ALIGNED: store i16<br>
-<br>
-; UNALIGNED: store <2 x i16> <i16 9, i16 12>, <2 x i16> addrspace(5)* %1, align 2<br>
+; ALL: store i16<br>
+; ALL: store i16<br>
define amdgpu_kernel void @merge_private_store_4_vector_elts_loads_v2i16_align2(i16 addrspace(5)* %out) #0 {<br>
%out.gep.1 = getelementptr i16, i16 addrspace(5)* %out, i32 1<br>
<br>
<br>
diff --git a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll<br>
index 0d9a4184e718..8302ad9562f5 100644<br>
--- a/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll<br>
+++ b/llvm/test/Transforms/LoadStoreVectorizer/AMDGPU/merge-stores.ll<br>
@@ -49,7 +49,8 @@ define amdgpu_kernel void @merge_global_store_2_constants_0_i16(i16 addrspace(1)<br>
}<br>
<br>
; CHECK-LABEL: @merge_global_store_2_constants_i16_natural_align<br>
-; CHECK: store <2 x i16><br>
+; CHECK: store i16<br>
+; CHECK: store i16<br>
define amdgpu_kernel void @merge_global_store_2_constants_i16_natural_align(i16 addrspace(1)* %out) #0 {<br>
%out.gep.1 = getelementptr i16, i16 addrspace(1)* %out, i32 1<br>
<br>
@@ -58,8 +59,19 @@ define amdgpu_kernel void @merge_global_store_2_constants_i16_natural_align(i16<br>
ret void<br>
}<br>
<br>
+; CHECK-LABEL: @merge_global_store_2_constants_i16_align_1<br>
+; CHECK: store <2 x i16><br>
+define amdgpu_kernel void @merge_global_store_2_constants_i16_align_1(i16 addrspace(1)* %out) #0 {<br>
+ %out.gep.1 = getelementptr i16, i16 addrspace(1)* %out, i32 1<br>
+<br>
+ store i16 123, i16 addrspace(1)* %out.gep.1, align 1<br>
+ store i16 456, i16 addrspace(1)* %out, align 1<br>
+ ret void<br>
+}<br>
+<br>
; CHECK-LABEL: @merge_global_store_2_constants_half_natural_align<br>
-; CHECK: store <2 x half><br>
+; CHECK: store half<br>
+; CHECK: store half<br>
define amdgpu_kernel void @merge_global_store_2_constants_half_natural_align(half addrspace(1)* %out) #0 {<br>
%out.gep.1 = getelementptr half, half addrspace(1)* %out, i32 1<br>
<br>
@@ -68,6 +80,16 @@ define amdgpu_kernel void @merge_global_store_2_constants_half_natural_align(hal<br>
ret void<br>
}<br>
<br>
+; CHECK-LABEL: @merge_global_store_2_constants_half_align_1<br>
+; CHECK: store <2 x half><br>
+define amdgpu_kernel void @merge_global_store_2_constants_half_align_1(half addrspace(1)* %out) #0 {<br>
+ %out.gep.1 = getelementptr half, half addrspace(1)* %out, i32 1<br>
+<br>
+ store half 2.0, half addrspace(1)* %out.gep.1, align 1<br>
+ store half 1.0, half addrspace(1)* %out, align 1<br>
+ ret void<br>
+}<br>
+<br>
; CHECK-LABEL: @merge_global_store_2_constants_i32<br>
; CHECK: store <2 x i32> <i32 456, i32 123>, <2 x i32> addrspace(1)* %{{[0-9]+}}, align 4<br>
define amdgpu_kernel void @merge_global_store_2_constants_i32(i32 addrspace(1)* %out) #0 {<br>
<br>
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
llvm-commits@lists.llvm.org<br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits</a><br>
</div>
</span></font></div>
</body>
</html>