[PATCH] D70379: [AMDGPU] Reserving VGPR for future SGPR Spill

Matt Arsenault via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue May 5 07:31:10 PDT 2020


arsenm added inline comments.


================
Comment at: llvm/lib/Target/AMDGPU/SIISelLowering.cpp:10862
+  // Allocate a VGPR for future SGPR Spill if
+  // "amdgpu-reserve-vgpr-for-sgpr-spill" option is used
+  if (VGPRReserveforSGPRSpill && !Info->VGPRReservedForSGPRSpill &&
----------------
Should add a note that this is a hack; if we split SGPR allocation from VGPR allocation, we shouldn't need to do this
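For example, something like this right above the check (just a sketch, exact wording up to you):

```
  // FIXME: This is a hack. Reserve a VGPR up front so later SGPR spills have
  // somewhere to go; once SGPR allocation is split from VGPR allocation, this
  // reservation should no longer be necessary.
  if (VGPRReserveforSGPRSpill && !Info->VGPRReservedForSGPRSpill &&
```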


================
Comment at: llvm/lib/Target/AMDGPU/SILowerSGPRSpills.cpp:245
+  SIMachineFunctionInfo *FuncInfo = MF.getInfo<SIMachineFunctionInfo>();
+  Register LowestAvailableVGPR = AMDGPU::NoRegister, ReservedVGPR;
+  ArrayRef<MCPhysReg> AllVGPR32s = getAllVGPR32(ST, MF);
----------------
Don't need = NoRegister
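A default-constructed Register is already the "no register" value, so this can just be (sketch):

```
  Register LowestAvailableVGPR;
  Register ReservedVGPR;
```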


================
Comment at: llvm/lib/Target/AMDGPU/SILowerSGPRSpills.cpp:254
+
+  if (LowestAvailableVGPR != AMDGPU::NoRegister) {
+    ReservedVGPR = FuncInfo->VGPRReservedForSGPRSpill;
----------------
Don't need != NoRegister. Also can invert and return early
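Sketch of the inverted form (the exact return depends on the enclosing function):

```
  if (!LowestAvailableVGPR)
    return; // or `return false;`, to match the enclosing function

  ReservedVGPR = FuncInfo->VGPRReservedForSGPRSpill;
  // ... rest of the logic, now without the extra nesting ...
```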


================
Comment at: llvm/lib/Target/AMDGPU/SILowerSGPRSpills.cpp:258
+    int i = 0;
+    for (auto &Reg : FuncInfo->getSGPRSpillVGPRs()) {
+      if (Reg.VGPR == ReservedVGPR) {
----------------
doesn't need to be a reference
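The elements are small, so copying is fine (sketch):

```
  for (auto Reg : FuncInfo->getSGPRSpillVGPRs()) {
    if (Reg.VGPR == ReservedVGPR) {
```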


================
Comment at: llvm/lib/Target/AMDGPU/SILowerSGPRSpills.cpp:261-263
+          if (MBB.isLiveIn(ReservedVGPR)) {
+            MBB.removeLiveIn(ReservedVGPR);
+          }
----------------
You can just unconditionally removeLiveIn. The isLiveIn is just a redundant map lookup
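i.e. just (sketch):

```
  MBB.removeLiveIn(ReservedVGPR);
```

removeLiveIn is already a no-op when the register isn't in the live-in list.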


================
Comment at: llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp:543-544
+
+bool SIMachineFunctionInfo::resetSGPRSpillVGPRs(Register ReservedVGPR,
+                                                MachineFunction &MF) {
+  for (auto *i = SpillVGPRs.begin(); i < SpillVGPRs.end(); i++) {
----------------
Should get a comment about what this is for. A better name might be removeVGPRForSGPRSpill?
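Something along these lines (the comment wording is just my guess at the intent):

```
/// Remove the VGPR that was reserved for SGPR spilling (via
/// "amdgpu-reserve-vgpr-for-sgpr-spill") but ended up not being needed, so it
/// can be reused as an ordinary register.
bool SIMachineFunctionInfo::removeVGPRForSGPRSpill(Register ReservedVGPR,
                                                   MachineFunction &MF) {
```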


================
Comment at: llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp:545
+                                                MachineFunction &MF) {
+  for (auto *i = SpillVGPRs.begin(); i < SpillVGPRs.end(); i++) {
+    if (i->VGPR == ReservedVGPR) {
----------------
++i


================
Comment at: llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp:550-551
+      for (MachineBasicBlock &MBB : MF) {
+        if (MBB.isLiveIn(ReservedVGPR))
+          MBB.removeLiveIn(ReservedVGPR);
+        MBB.sortUniqueLiveIns();
----------------
Can unconditionally removeLiveIn


================
Comment at: llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp:552
+          MBB.removeLiveIn(ReservedVGPR);
+        MBB.sortUniqueLiveIns();
+      }
----------------
You're going to end up calling sortUniqueLiveIns for every block, for every spill VGPR. The common case is only 1 spill VGPR, but you could collect all of the registers and do one walk through the function
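Rough sketch of what I mean (names here are illustrative, not from the patch):

```
  // Collect the reserved VGPRs that are being dropped first...
  SmallVector<Register, 2> RegsToRemove;
  for (auto &Reg : FuncInfo->getSGPRSpillVGPRs())
    if (Reg.VGPR == ReservedVGPR) // or whatever the removal condition is
      RegsToRemove.push_back(Reg.VGPR);

  // ...then do a single walk over the function, so sortUniqueLiveIns runs
  // once per block instead of once per (block, spill VGPR) pair.
  for (MachineBasicBlock &MBB : MF) {
    for (Register R : RegsToRemove)
      MBB.removeLiveIn(R);
    MBB.sortUniqueLiveIns();
  }
```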


================
Comment at: llvm/test/CodeGen/AMDGPU/reserve-vgpr-for-sgpr-spill.ll:12
+; GCN: ; NumVgprs: 256
+define fastcc i32 @parent_func(i32 %0, i32 %1, [255 x i32] %b) #1 {
+entry:
----------------
saiislam wrote:
> arsenm wrote:
> > I don't see how this test really stresses the need for SGPR spilling with no free VGPRs. I would expect this to look more like the tests in spill-wide-sgpr.ll or spill-scavenge-offset.ll, which force high register pressure.
> The arguments of the function take up all 256 VGPRs, so the FP (s34) and SGPRs (s30 and s31) get spilled into the reserved VGPR (v32) and restored later.
> The tests in spill-wide-sgpr.ll also enforce 2-, 4-, and 8-way SGPR spilling, which I can do here by modifying the body of parent_func while keeping the function signatures of both functions. Will that be OK?
This won't use all 256 VGPRs. Arguments beyond the first 32 VGPRs go to the stack. It just happens that an external call currently looks like it uses a lot of registers, but this is likely to change in the future. A more reliable way would be to use an ugly blob of asm, like:


```
  call void asm sideeffect "",
    "~{v0},~{v1},~{v2},~{v3},~{v4},~{v5},~{v6},~{v7},~{v8},~{v9}
    ,~{v10},~{v11},~{v12},~{v13},~{v14},~{v15},~{v16},~{v17},~{v18},~{v19}
    ,~{v20},~{v21},~{v22},~{v23},~{v24},~{v25},~{v26},~{v27},~{v28},~{v29}
    ,~{v30},~{v31},~{v32},~{v33},~{v34},~{v35},~{v36},~{v37},~{v38},~{v39}
    ,~{v40},~{v41},~{v42},~{v43},~{v44},~{v45},~{v46},~{v47},~{v48},~{v49}
    ,~{v50},~{v51},~{v52},~{v53},~{v54},~{v55},~{v56},~{v57},~{v58},~{v59}
    ,~{v60},~{v61},~{v62},~{v63},~{v64},~{v65},~{v66},~{v67},~{v68},~{v69}
    ,~{v70},~{v71},~{v72},~{v73},~{v74},~{v75},~{v76},~{v77},~{v78},~{v79}
    ,~{v80},~{v81},~{v82},~{v83},~{v84},~{v85},~{v86},~{v87},~{v88},~{v89}
    ,~{v90},~{v91},~{v92},~{v93},~{v94},~{v95},~{v96},~{v97},~{v98},~{v99}
    ,~{v100},~{v101},~{v102},~{v103},~{v104},~{v105},~{v106},~{v107},~{v108},~{v109}
    ,~{v110},~{v111},~{v112},~{v113},~{v114},~{v115},~{v116},~{v117},~{v118},~{v119}
    ,~{v120},~{v121},~{v122},~{v123},~{v124},~{v125},~{v126},~{v127},~{v128},~{v129}
    ,~{v130},~{v131},~{v132},~{v133},~{v134},~{v135},~{v136},~{v137},~{v138},~{v139}
    ,~{v140},~{v141},~{v142},~{v143},~{v144},~{v145},~{v146},~{v147},~{v148},~{v149}
    ,~{v150},~{v151},~{v152},~{v153},~{v154},~{v155},~{v156},~{v157},~{v158},~{v159}
    ,~{v160},~{v161},~{v162},~{v163},~{v164},~{v165},~{v166},~{v167},~{v168},~{v169}
    ,~{v170},~{v171},~{v172},~{v173},~{v174},~{v175},~{v176},~{v177},~{v178},~{v179}
    ,~{v180},~{v181},~{v182},~{v183},~{v184},~{v185},~{v186},~{v187},~{v188},~{v189}
    ,~{v190},~{v191},~{v192},~{v193},~{v194},~{v195},~{v196},~{v197},~{v198},~{v199}
    ,~{v200},~{v201},~{v202},~{v203},~{v204},~{v205},~{v206},~{v207},~{v208},~{v209}
    ,~{v210},~{v211},~{v212},~{v213},~{v214},~{v215},~{v216},~{v217},~{v218},~{v219}
    ,~{v220},~{v221},~{v222},~{v223},~{v224},~{v225},~{v226},~{v227},~{v228},~{v229}
    ,~{v230},~{v231},~{v232},~{v233},~{v234},~{v235},~{v236},~{v237},~{v238},~{v239}
    ,~{v240},~{v241},~{v242},~{v243},~{v244},~{v245},~{v246},~{v247},~{v248},~{v249}
    ,~{v250},~{v251},~{v252},~{v253},~{v254},~{v255}"() #1

```

You can also use amdgpu_max_waves_per_eu to artificially limit the number of available VGPRs and reduce the number of registers you have to list.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D70379/new/

https://reviews.llvm.org/D70379




