[PATCH] D150551: [AMDGPU] Reintroduce CC exception for non-inlined functions in Promote Alloca limits

Pierre van Houtryve via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon May 15 02:24:36 PDT 2023


Pierre-vh created this revision.
Pierre-vh added reviewers: foad, rampitec, arsenm.
Herald added subscribers: StephenFan, kerbowa, hiraditya, tpr, dstuttard, yaxunl, jvesely, kzhuravl.
Herald added a project: All.
Pierre-vh requested review of this revision.
Herald added subscribers: llvm-commits, wdng.
Herald added a project: LLVM.

This is effectively a partial revert of https://reviews.llvm.org/D145586 (fd1d60873fdc <https://reviews.llvm.org/rGfd1d60873fdce6e908c9865ddf925f2616fccd55>).

D145586 <https://reviews.llvm.org/D145586> was originally introduced to help with SWDEV-363662, and it did, but
it also caused a 25% drop in performance in
some MIOpen benchmarks where functions appear
to be inlined more conservatively.

This patch restores the pre-D145586 behavior
for PromoteAlloca: functions with a non-entry
calling convention get a 32-VGPR threshold,
unless the function is marked "alwaysinline".
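
For illustration, here is a minimal IR sketch of the
distinction (hypothetical function names, mirroring the
pattern exercised by the updated test below). With
"amdgpu-flat-work-group-size"="1,256", the first function
is subject to the 32-VGPR cap, so its [9 x i64] alloca is
left as-is; the "alwaysinline" variant keeps the full
subtarget-derived limit and is eligible for promotion to
<9 x i64>:

  ; not promoted: the 32-VGPR cap applies (non-entry CC, no "alwaysinline")
  define void @capped(ptr addrspace(1) %out, i32 %index) #0 {
  entry:
    %tmp = alloca [9 x i64], addrspace(5)
    store i64 0, ptr addrspace(5) %tmp
    %tmp1 = getelementptr [9 x i64], ptr addrspace(5) %tmp, i32 0, i32 %index
    %tmp2 = load i64, ptr addrspace(5) %tmp1
    store i64 %tmp2, ptr addrspace(1) %out
    ret void
  }

  ; promoted to <9 x i64>: "alwaysinline" lifts the cap
  define void @full_budget(ptr addrspace(1) %out, i32 %index) #1 {
  entry:
    %tmp = alloca [9 x i64], addrspace(5)
    store i64 0, ptr addrspace(5) %tmp
    %tmp1 = getelementptr [9 x i64], ptr addrspace(5) %tmp, i32 0, i32 %index
    %tmp2 = load i64, ptr addrspace(5) %tmp1
    store i64 %tmp2, ptr addrspace(1) %out
    ret void
  }

  attributes #0 = { "amdgpu-flat-work-group-size"="1,256" }
  attributes #1 = { alwaysinline "amdgpu-flat-work-group-size"="1,256" }

Entry functions (kernels) are unaffected by the cap, since
the check only applies to non-entry calling conventions.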

A good deal of AMDGPU code goes through the
AMDGPUAlwaysInline pass anyway, so in our
backend "alwaysinline" is very common.

This change does not affect SWDEV-363662 (the motivating issue for introducing D145586 <https://reviews.llvm.org/D145586>).

Fixes SWDEV-399519


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D150551

Files:
  llvm/lib/Target/AMDGPU/AMDGPUPromoteAlloca.cpp
  llvm/test/CodeGen/AMDGPU/vector-alloca-limits.ll


Index: llvm/test/CodeGen/AMDGPU/vector-alloca-limits.ll
===================================================================
--- llvm/test/CodeGen/AMDGPU/vector-alloca-limits.ll
+++ llvm/test/CodeGen/AMDGPU/vector-alloca-limits.ll
@@ -139,11 +139,26 @@
 }
 
 ; OPT-LABEL: @func_alloca_9xi64_max256(
+; OPT: alloca
+; OPT-NOT: <9 x i64>
+; LIMIT32: alloca
+; LIMIT32-NOT: <9 x i64>
+define void @func_alloca_9xi64_max256(ptr addrspace(1) %out, i32 %index) #2 {
+entry:
+  %tmp = alloca [9 x i64], addrspace(5)
+  store i64 0, ptr addrspace(5) %tmp
+  %tmp1 = getelementptr [9 x i64], ptr addrspace(5) %tmp, i32 0, i32 %index
+  %tmp2 = load i64, ptr addrspace(5) %tmp1
+  store i64 %tmp2, ptr addrspace(1) %out
+  ret void
+}
+
+; OPT-LABEL: @alwaysinlined_func_alloca_9xi64_max256(
 ; OPT-NOT: alloca
 ; OPT: <9 x i64>
 ; LIMIT32: alloca
 ; LIMIT32-NOT: <9 x i64>
-define void @func_alloca_9xi64_max256(ptr addrspace(1) %out, i32 %index) #2 {
+define void @alwaysinlined_func_alloca_9xi64_max256(ptr addrspace(1) %out, i32 %index) #3 {
 entry:
   %tmp = alloca [9 x i64], addrspace(5)
   store i64 0, ptr addrspace(5) %tmp
@@ -156,3 +171,4 @@
 attributes #0 = { "amdgpu-flat-work-group-size"="1,1024" }
 attributes #1 = { "amdgpu-flat-work-group-size"="1,512" }
 attributes #2 = { "amdgpu-flat-work-group-size"="1,256" }
+attributes #3 = { alwaysinline "amdgpu-flat-work-group-size"="1,256" }
Index: llvm/lib/Target/AMDGPU/AMDGPUPromoteAlloca.cpp
===================================================================
--- llvm/lib/Target/AMDGPU/AMDGPUPromoteAlloca.cpp
+++ llvm/lib/Target/AMDGPU/AMDGPUPromoteAlloca.cpp
@@ -162,7 +162,15 @@
     return 128;
 
   const GCNSubtarget &ST = TM.getSubtarget<GCNSubtarget>(F);
-  return ST.getMaxNumVGPRs(ST.getWavesPerEU(F).first);
+  unsigned MaxVGPRs = ST.getMaxNumVGPRs(ST.getWavesPerEU(F).first);
+
+  // A non-entry function has only 32 caller-preserved registers.
+  // Do not promote an alloca that would force spilling unless we know the
+  // function will be inlined.
+  if (!F.hasFnAttribute(Attribute::AlwaysInline) &&
+      !AMDGPU::isEntryFunctionCC(F.getCallingConv()))
+    MaxVGPRs = std::min(MaxVGPRs, 32u);
+  return MaxVGPRs;
 }
 
 } // end anonymous namespace

