[all-commits] [llvm/llvm-project] ea4606: [AMDGPU] Update target helpers & GCNSchedStrategy ...

Diana Picus via All-commits all-commits at lists.llvm.org
Thu Mar 6 02:01:41 PST 2025


  Branch: refs/heads/users/rovka/dvgpr-5
  Home:   https://github.com/llvm/llvm-project
  Commit: ea460637afd43c854e70d184708ab0fcd2a20f73
      https://github.com/llvm/llvm-project/commit/ea460637afd43c854e70d184708ab0fcd2a20f73
  Author: Diana Picus <Diana-Magda.Picus at amd.com>
  Date:   2025-03-06 (Thu, 06 Mar 2025)

  Changed paths:
    M llvm/lib/Target/AMDGPU/AMDGPU.td
    M llvm/lib/Target/AMDGPU/GCNSchedStrategy.cpp
    M llvm/lib/Target/AMDGPU/GCNSubtarget.h
    M llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp
    M llvm/unittests/Target/AMDGPU/AMDGPUUnitTests.cpp
    M llvm/unittests/Target/AMDGPU/CMakeLists.txt

  Log Message:
  -----------
  [AMDGPU] Update target helpers & GCNSchedStrategy for dynamic VGPRs

In dynamic VGPR mode, we can allocate up to 8 blocks of either 16 or 32
VGPRs (based on a chip-wide setting which we can model with a Subtarget
feature). Update some of the subtarget helpers to reflect this.

In particular:
- getVGPRAllocGranule now returns the block size
- getAddressableNumVGPRs is limited to 8 * the block size (see the sketch after this list)
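
A minimal standalone sketch of that helper behaviour, assuming a
chip-wide block size of 16 or 32 VGPRs and a cap of 8 blocks; the
struct and field names below are hypothetical stand-ins, not the
actual GCNSubtarget/AMDGPUBaseInfo interface:

  // Hypothetical stand-in for the real subtarget queries.
  struct DynVGPRConfig {
    bool DynamicVGPRs;        // dynamic VGPR mode enabled
    bool DynamicVGPRBlock32;  // chip-wide setting: 32-VGPR blocks instead of 16
  };

  unsigned getVGPRAllocGranule(const DynVGPRConfig &Cfg) {
    // In dynamic VGPR mode the allocation granule is the block size.
    return Cfg.DynamicVGPRBlock32 ? 32 : 16;
  }

  unsigned getAddressableNumVGPRs(const DynVGPRConfig &Cfg) {
    // At most 8 blocks can be allocated, so the addressable count is
    // capped at 8 * block size (128 or 256 VGPRs).
    return 8 * getVGPRAllocGranule(Cfg);
  }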

We also try to be more careful about how many VGPR blocks we allocate:
when deciding whether to revert scheduling after a given stage, we check
that we haven't increased the number of VGPR blocks that need to be
allocated.
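
A rough sketch of that block-based revert check, assuming VGPR demand
is rounded up to whole allocation blocks (the function names here are
illustrative, not the actual GCNSchedStrategy interface):

  unsigned getNumVGPRBlocks(unsigned NumVGPRs, unsigned BlockSize) {
    // Round the VGPR count up to a whole number of blocks.
    return (NumVGPRs + BlockSize - 1) / BlockSize;
  }

  bool shouldRevertStage(unsigned VGPRsBefore, unsigned VGPRsAfter,
                         unsigned BlockSize) {
    // Revert only if the new schedule needs more VGPR blocks than the
    // old one did; staying within the same block count is fine.
    return getNumVGPRBlocks(VGPRsAfter, BlockSize) >
           getNumVGPRBlocks(VGPRsBefore, BlockSize);
  }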




