[llvm] [AMDGPU] Folding imm offset in more cases for scratch access (PR #70634)

via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 1 07:29:37 PDT 2023


================
@@ -1146,13 +1146,35 @@ bool AMDGPUDAGToDAGISel::isDSOffset2Legal(SDValue Base, unsigned Offset0,
   return CurDAG->SignBitIsZero(Base);
 }
 
-bool AMDGPUDAGToDAGISel::isFlatScratchBaseLegal(SDValue Base,
+// This is used to check whether the address of a flat scratch load/store, in
+// the form `base1 + (base2) + immediate offset`, is legal with respect to the
+// hardware's requirement that the SGPR/VGPR address offset should be unsigned.
+// The `base` parts are those that would go into an SGPR or VGPR.
+// When the value in the 32-bit `base` can be negative, calculate the scratch
+// offset using a 32-bit add instruction; otherwise use `base(unsigned) + offset`.
+bool AMDGPUDAGToDAGISel::isFlatScratchBaseLegal(SDValue FullAddr, SDValue Base1,
+                                                SDValue Base2,
                                                 uint64_t FlatVariant) const {
   if (FlatVariant != SIInstrFlags::FlatScratch)
     return true;
-  // When value in 32-bit Base can be negative calculate scratch offset using
-  // 32-bit add instruction, otherwise use Base(unsigned) + offset.
-  return CurDAG->SignBitIsZero(Base);
+
+  assert(Base1.getNode());
+  if (FullAddr.getOpcode() == ISD::ADD && !Base2.getNode()) {
----------------
ruiling wrote:

Thanks for the suggestion. It is easy to check, and I see an improvement in one test.

https://github.com/llvm/llvm-project/pull/70634
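
For readers following the thread, below is a minimal, self-contained sketch
(editorial, not part of the patch) of the rule the new comment states: an
immediate offset may only be folded into a flat scratch access when the 32-bit
base is provably non-negative, because the hardware treats the SGPR/VGPR part
of the address as unsigned. The helper names (signBitKnownZero,
useFoldedImmOffset) are hypothetical and only mirror the idea; the real code
asks SelectionDAG::SignBitIsZero.

#include <cstdint>

// Stand-in for the question the backend asks SelectionDAG::SignBitIsZero:
// can we prove the 32-bit base value is non-negative?
static bool signBitKnownZero(int32_t Base) { return Base >= 0; }

// Folding an immediate offset is only safe when the register base is known
// non-negative; otherwise the address must be formed with an explicit 32-bit
// add and a zero immediate offset.
static bool useFoldedImmOffset(int32_t Base) { return signBitKnownZero(Base); }

int main() {
  bool FoldOk  = useFoldedImmOffset(/*Base=*/64);  // non-negative: fold is fine
  bool FoldBad = useFoldedImmOffset(/*Base=*/-16); // sign bit set: fall back to add
  return (FoldOk && !FoldBad) ? 0 : 1;
}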

