[PATCH] D144445: [AggressiveInstCombine] folding load for constant global patterned arrays and structs

Nikita Popov via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 8 13:00:03 PST 2023


nikic added inline comments.


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:876
+    };
+    g = DL.getTypeStoreSize(GEP->getSourceElementType());
+    Value *V = GEP;
----------------
This isn't the right way to compute the GEP stride: GEP works on alloc sizes rather than store sizes, and can have multiple indices. You can use the `collectOffset()` methods to handle all the cases, and then `GreatestCommonDivisor()` to calculate the GCD.
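
Roughly what I have in mind (untested sketch; names like `Stride` are illustrative, not required):

```
unsigned BW = DL.getIndexTypeSizeInBits(GEP->getType());
MapVector<Value *, APInt> VarOffsets;
APInt ConstOffset(BW, 0);
if (!GEP->collectOffset(DL, BW, VarOffsets, ConstOffset))
  return false;

// The GCD of all variable-index scales (and the constant offset) is the
// smallest byte step the GEP can take through the global.
APInt Stride(BW, 0); // GreatestCommonDivisor(0, X) == X
for (auto &[Index, Scale] : VarOffsets)
  Stride = APIntOps::GreatestCommonDivisor(Stride, Scale);
if (!ConstOffset.isZero())
  Stride = APIntOps::GreatestCommonDivisor(Stride, ConstOffset);
// A zero stride (constant-only GEP) and negative/non-inbounds offsets would
// still need separate handling.
```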


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:894
+  if (!LI)
+    return false;
+
----------------
This needs to bail out for volatile loads.
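
I.e. something like (sketch):

```
if (!LI || LI->isVolatile())
  return false;
```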


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:901
+  if (!GV || !GV->isConstant() ||
+      (!GV->hasDefinitiveInitializer() && !GV->hasUniqueInitializer()))
+    return false;
----------------
This is still checking for unique initializers -- should only check definitive initializer.
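
I.e.:

```
if (!GV || !GV->isConstant() || !GV->hasDefinitiveInitializer())
  return false;
```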


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:909
+  if (GV->getAlign().valueOrOne().value() < LoadAlign)
+    return false;
+  Constant *C = GV->getInitializer();
----------------
Even if we can't use the alignment, we can still use the GEP stride.
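
Rough sketch of the idea (reusing the hypothetical `Stride` from the earlier comment):

```
uint64_t LoadAlign = LI->getAlign().value();
uint64_t GVAlign = GV->getAlign().valueOrOne().value();
// Only fold the alignment into the stride when the global is at least as
// aligned as the load requires; otherwise keep the GEP stride instead of
// bailing out.
if (GVAlign >= LoadAlign && LoadAlign > Stride.getZExtValue())
  Stride = APInt(Stride.getBitWidth(), LoadAlign);
```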


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:921
+  Constant *Ca = ConstantFoldLoadFromConst(
+      C, LoadTy, APInt(DL.getTypeSizeInBits(C->getType()), 0), DL);
+
----------------
Can just omit the Offset argument here (it's zero by default).
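
I.e. just:

```
Constant *Ca = ConstantFoldLoadFromConst(C, LoadTy, DL);
```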


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:929
+
+  for (uint64_t ByteOffset = Stride, E = GVSize - LoadSize; ByteOffset <= E;
+       ByteOffset += Stride)
----------------
Do we need the `LoadSize` adjustment here? Can we instead iterate to `ByteOffset < GVSize`?

The current `LoadSize` calculation is not right for pointers, but it seems like we can just drop it entirely.
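
Something like this should be enough (sketch): an offset whose load would run past the end of the initializer can at worst fail to fold to the same constant, which only makes the check more conservative.

```
for (uint64_t ByteOffset = Stride; ByteOffset < GVSize; ByteOffset += Stride) {
  // ... fold the load at ByteOffset and compare it against Ca ...
}
```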


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:933
+                  C, LoadTy,
+                  APInt(DL.getTypeSizeInBits(C->getType()), ByteOffset), DL))
+      return false;
----------------
It's okay to just use 64 as the APInt size in this context. You are currently using the size of the initializer, which will make for a *very* wide integer...
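
I.e. (sketch):

```
// 64 bits is plenty for a byte offset; DL.getTypeSizeInBits(C->getType())
// is the bit width of the entire initializer.
if (Ca != ConstantFoldLoadFromConst(C, LoadTy, APInt(64, ByteOffset), DL))
  return false;
```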


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D144445/new/

https://reviews.llvm.org/D144445


