[PATCH] D135137: [AggressiveInstCombine] Load merge the reverse load pattern and mixed load sizes.

Dave Green via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Sun Oct 9 02:00:09 PDT 2022


dmgreen added a comment.

Nice addition. I find the recursion a bit difficult to reason through, so forgive my questions below.



================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:713
+  bool Reverse = false;
+  if (Offset2.getZExtValue() < Offset1.getZExtValue()) {
+    std::swap(LI1, LI2);
----------------
Offset2.slt(Offset1) will avoid the conversion to i64 via getZExtValue() (which asserts if the offset doesn't fit in 64 bits).
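i.e. something like this (untested, and I'm guessing at the offset swap from the surrounding context):

  bool Reverse = false;
  // APInt::slt compares the offsets directly, with no getZExtValue()
  // calls that would assert for values wider than 64 bits.
  if (Offset2.slt(Offset1)) {
    std::swap(LI1, LI2);
    std::swap(Offset1, Offset2);
    Reverse = true;
  }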


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:724
   uint64_t LoadSize2 = LI2->getType()->getPrimitiveSizeInBits();
-  if (Load1Ptr != Load2Ptr || LoadSize1 != LoadSize2)
+  if (Load1Ptr != Load2Ptr)
     return false;
----------------
What happens if the load sizes are not the same? The removed LoadSize1 != LoadSize2 check sounds like it was protecting against a number of things.


================
Comment at: llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp:732
   // Alias Analysis to check for store b/w the loads.
-  MemoryLocation Loc = MemoryLocation::get(LI2);
-  unsigned NumScanned = 0;
-  for (Instruction &Inst : make_range(LI1->getIterator(), LI2->getIterator())) {
-    if (Inst.mayWriteToMemory() && isModSet(AA.getModRefInfo(&Inst, Loc)))
-      return false;
-    if (++NumScanned > MaxInstrsToScan)
-      return false;
+  if (LI1->comesBefore(LI2)) {
+    MemoryLocation Loc = MemoryLocation::get(LI2);
----------------
This looks like it just needs a Start and an End, set to LI1/LI2 depending on which comesBefore, which would hopefully avoid duplicating the scan loop.
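Something like this, perhaps (untested sketch; the Start/End names are just for illustration):

  // Scan in program order from whichever load comes first.
  LoadInst *Start = LI1, *End = LI2;
  if (!Start->comesBefore(End))
    std::swap(Start, End);

  MemoryLocation Loc = MemoryLocation::get(End);
  unsigned NumScanned = 0;
  for (Instruction &Inst :
       make_range(Start->getIterator(), End->getIterator())) {
    if (Inst.mayWriteToMemory() && isModSet(AA.getModRefInfo(&Inst, Loc)))
      return false;
    if (++NumScanned > MaxInstrsToScan)
      return false;
  }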


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D135137/new/

https://reviews.llvm.org/D135137
