[PATCH] D144445: [AggressiveInstCombine] folding load for constant global patterened arrays and structs

Kohei Asano via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Mar 9 01:53:08 PST 2023


khei4 planned changes to this revision.
khei4 added a comment.

Thank you for the review! I'm sorry about the messy changes!
I'll also add ptr arrays and constant offsets: https://reviews.llvm.org/D145355

I'm still thinking about how to handle pointers, and about the behavior you mentioned for bound-crossing loads and load alignments bigger than the global's alignment. I'll figure it out!



================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:908-910
+  // TODO: how to handle pointer?
+  if (!Ca)
+    return false;
----------------
The current implementation cannot read pointer arrays.


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:876
+    };
+    g = DL.getTypeStoreSize(GEP->getSourceElementType());
+    Value *V = GEP;
----------------
nikic wrote:
> This isn't the right way to compute the GEP stride: GEP works on alloc sizes rather than store sizes, and can have multiple indices. You can use the `collectOffset()` methods to handle all the cases, and then `GreatestCommonDivisor()` to calculate the GCD.
> You can use the collectOffset() methods to handle all the cases, and then GreatestCommonDivisor() to calculate the GCD.

These methods saved me, thanks!
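
A rough sketch of how I understand the suggestion (not the exact patch code; `GEP` and `DL` are the values already in scope here):

  // Decompose the GEP into a constant offset plus scaled variable indices,
  // then take the GCD of the constant offset and every scale as the stride.
  unsigned BW = DL.getIndexTypeSizeInBits(GEP->getType());
  MapVector<Value *, APInt> VarOffsets;
  APInt ConstOffset(BW, 0);
  if (!GEP->collectOffset(DL, BW, VarOffsets, ConstOffset))
    return false;
  APInt Stride = ConstOffset; // GreatestCommonDivisor(0, X) == X
  for (auto &VO : VarOffsets)
    Stride = APIntOps::GreatestCommonDivisor(Stride, VO.second);
  // (an all-zero GEP gives Stride == 0, which still needs special handling)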


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:901
+  if (!GV || !GV->isConstant() ||
+      (!GV->hasDefinitiveInitializer() && !GV->hasUniqueInitializer()))
+    return false;
----------------
nikic wrote:
> This is still checking for unique initializers -- should only check definitive initializer.
Thanks for pointing that out again...
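For reference, the check then reduces to something like:

  // Only a definitive initializer guarantees the value the load sees at runtime.
  if (!GV || !GV->isConstant() || !GV->hasDefinitiveInitializer())
    return false;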


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:909
+  if (GV->getAlign().valueOrOne().value() < LoadAlign)
+    return false;
+  Constant *C = GV->getInitializer();
----------------
nikic wrote:
> Even if we can't use the alignment, we can still use the GEP stride.
Done.
(Just for my own understanding, I'll clarify the case of a load alignment bigger than the global's alignment on Discord.)


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:929
+
+  for (uint64_t ByteOffset = Stride, E = GVSize - LoadSize; ByteOffset <= E;
+       ByteOffset += Stride)
----------------
nikic wrote:
> Do we need the `LoadSize` adjustment here? Can we instead iterate to `ByteOffset < GVSize`?
> 
> The current `LoadSize` calculation is not right for pointers, but it seems like we can just drop it entirely.
> Do we need the LoadSize adjustment here? Can we instead iterate to ByteOffset < GVSize?

I think bound-crossing loads cannot be excluded without the LoadSize bound; see https://alive2.llvm.org/ce/z/jYsLBk
Sorry, I'm not confident about the load semantics here. I'll try to ask on Discord.
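
To make the concern concrete (hypothetical numbers): with a 12-byte global and an 8-byte load, iterating up to ByteOffset < GVSize would also test ByteOffset == 8, where the load reads bytes 8..15 and runs past the end of the global. A sketch that keeps the bound but sizes the load through the DataLayout, so pointer loads are covered too:

  // Store size in bytes; correct for pointer types as well (assumes LoadTy
  // is not a scalable vector type).
  uint64_t LoadSize = DL.getTypeStoreSize(LoadTy);
  if (LoadSize == 0 || LoadSize > GVSize)
    return false;
  // Staying in bounds requires ByteOffset + LoadSize <= GVSize.
  for (uint64_t ByteOffset = Stride, E = GVSize - LoadSize; ByteOffset <= E;
       ByteOffset += Stride) {
    // ... constant-fold the load at ByteOffset and compare the results ...
  }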


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:933
+                  C, LoadTy,
+                  APInt(DL.getTypeSizeInBits(C->getType()), ByteOffset), DL))
+      return false;
----------------
nikic wrote:
> It's okay to just use 64 as the APInt size in this context. You are currently using the size of the initializer, which will make for a *very* wide integer...
Thanks! That's embarrassing...
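So the offset argument can simply be a fixed-width APInt, e.g.:

  // 64 bits is plenty for a byte offset; the old code built an APInt as wide
  // as the whole initializer in bits.
  APInt Offset(64, ByteOffset);
  // ... ConstantFoldLoadFromConst(C, LoadTy, Offset, DL) as before ...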


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D144445/new/

https://reviews.llvm.org/D144445


