[llvm-branch-commits] [llvm-branch] r168443 - in /llvm/branches/release_32: ./ lib/Transforms/Scalar/SROA.cpp test/Transforms/SROA/basictest.ll test/Transforms/SROA/phi-and-select.ll test/Transforms/SROA/vector-promotion.ll

Pawel Wodnicki pawel at 32bitmicro.com
Wed Nov 21 10:28:32 PST 2012


Author: pawel
Date: Wed Nov 21 12:28:32 2012
New Revision: 168443

URL: http://llvm.org/viewvc/llvm-project?rev=168443&view=rev
Log:
Merging r168361, r168346 and r168227 into the 3.2 release branch

Merging r168361: 

Fix PR14132 and handle OOB loads speculated through PHI nodes.

The issue is that we may end up with newly OOB loads when speculating
a load into the predecessors of a PHI node, and this confuses the new
integer splitting logic in some cases, triggering an assertion failure.
In fact, the branch in question must be dead code as it loads from
a too-narrow alloca. Add code to handle this gracefully and leave the
requisite FIXMEs for both optimizing more aggressively and doing more to
aid sanitizing invalid code which triggers these patterns.
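
For illustration, a minimal sketch of the pattern that triggered the
assertion, condensed from the PR14132 regression test added below (names
are illustrative). Once the pointer alloca is promoted, the load ends up
keyed on a PHI of pointers, and one incoming pointer is a one-byte
alloca, so the speculated copy of the load reads past the allocation:

  %a = alloca i64
  %b = alloca i8                     ; too narrow for the load below
  %b.cast = bitcast i8* %b to i64*
  ...
if.end:
  %ptr = phi i64* [ %a, %entry ], [ %b.cast, %if.then ]
  %result = load i64* %ptr           ; speculating this load into %if.then
                                     ; reads 8 bytes from the 1-byte %b

Such a statically out-of-bounds load is now replaced with undef instead
of tripping the integer splitting logic.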


Merging r168346:

Rework the rewriting of loads and stores for vector and integer allocas
to properly handle the combinations of these with split integer loads
and stores. This essentially replaces Evan's r168227 by refactoring the
code in a different way, and trying to mirror that refactoring in both
the load and store sides of the rewriting.
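
As a point of reference for the extractInteger/insertInteger changes in
the diff below: the shift amount is computed as 8*Offset on little-endian
targets and 8*(StoreSize(wide type) - StoreSize(element) - Offset) on
big-endian ones. For example, extracting the i16 at byte offset 2 from an
i64 on a little-endian target emits roughly the following (value names
are illustrative, using the ".shift"/".trunc" suffixes the rewriter
appends):

  %tmp.shift = lshr i64 %tmp, 16     ; 8 * 2
  %tmp.trunc = trunc i64 %tmp.shift to i16

The same extraction on a big-endian target shifts by
8 * (8 - 2 - 2) = 32 instead.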

Generally speaking there was some really problematic duplicated code
here that led to poorly founded assumptions and then subtle bugs. Now
much of the code actually flows through and follows a more consistent
style and logical path. There is still a tiny bit of duplication on the
store side of things, but it is much less bad.

This also changes the logic to never re-use a load or store instruction
as that was simply too error prone in practice.

I've added a few tests (one a reduction of the one in Evan's original
patch, which happened to be the same as the report in PR14349). I'm
going to look at adding a few more tests for things I found and fixed in
passing (such as the volatile tests in the vectorizable predicate).

This patch has survived bootstrap, and modulo one bugfix survived
Duncan's test suite, but let me know if anything else explodes.


Merging r168227:

Teach SROA rewriteVectorizedStoreInst to handle cases when the loaded value is narrower than the stored value. rdar://12713675
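
The shape of IR involved, mirroring the new PR14349.1 test below: an
alloca-wide integer store followed by a narrower vector load of the same
memory, so the stored value must be split rather than assumed to match
the load width exactly:

  %a = alloca i32
  store i32 %x, i32* %a
  %cast = bitcast i32* %a to <2 x i8>*
  %vec = load <2 x i8>* %cast        ; 16-bit load of a 32-bit stored value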


Modified:
    llvm/branches/release_32/   (props changed)
    llvm/branches/release_32/lib/Transforms/Scalar/SROA.cpp
    llvm/branches/release_32/test/Transforms/SROA/basictest.ll
    llvm/branches/release_32/test/Transforms/SROA/phi-and-select.ll
    llvm/branches/release_32/test/Transforms/SROA/vector-promotion.ll

Propchange: llvm/branches/release_32/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Nov 21 12:28:32 2012
@@ -1,3 +1,3 @@
 /llvm/branches/Apple/Pertwee:110850,110961
 /llvm/branches/type-system-rewrite:133420-134817
-/llvm/trunk:155241,167718-167719,167731,167737,167743,167750,167784,167811,167817,167855,167860-167864,167875,167942,167948,167966,168001,168189,168197-168198,168316,168319,168352
+/llvm/trunk:155241,167718-167719,167731,167737,167743,167750,167784,167811,167817,167855,167860-167864,167875,167942,167948,167966,168001,168189,168197-168198,168227,168316,168319,168346,168352,168361

Modified: llvm/branches/release_32/lib/Transforms/Scalar/SROA.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/branches/release_32/lib/Transforms/Scalar/SROA.cpp?rev=168443&r1=168442&r2=168443&view=diff
==============================================================================
--- llvm/branches/release_32/lib/Transforms/Scalar/SROA.cpp (original)
+++ llvm/branches/release_32/lib/Transforms/Scalar/SROA.cpp Wed Nov 21 12:28:32 2012
@@ -568,6 +568,10 @@
 
     // Clamp the end offset to the end of the allocation. Note that this is
     // formulated to handle even the case where "BeginOffset + Size" overflows.
+    // NOTE! This may appear superficially to be something we could ignore
+    // entirely, but that is not so! There may be PHI-node uses where some
+    // instructions are dead but not others. We can't completely ignore the
+    // PHI node, and so have to record at least the information here.
     assert(AllocSize >= BeginOffset); // Established above.
     if (Size > AllocSize - BeginOffset) {
       DEBUG(dbgs() << "WARNING: Clamping a " << Size << " byte use @" << Offset
@@ -1382,11 +1386,7 @@
   /// \brief A collection of instructions to delete.
   /// We try to batch deletions to simplify code and make things a bit more
   /// efficient.
-  SmallVector<Instruction *, 8> DeadInsts;
-
-  /// \brief A set to prevent repeatedly marking an instruction split into many
-  /// uses as dead. Only used to guard insertion into DeadInsts.
-  SmallPtrSet<Instruction *, 4> DeadSplitInsts;
+  SetVector<Instruction *, SmallVector<Instruction *, 8> > DeadInsts;
 
   /// \brief Post-promotion worklist.
   ///
@@ -1573,7 +1573,7 @@
     do {
       LoadInst *LI = Loads.pop_back_val();
       LI->replaceAllUsesWith(NewPN);
-      Pass.DeadInsts.push_back(LI);
+      Pass.DeadInsts.insert(LI);
     } while (!Loads.empty());
 
     // Inject loads into all of the pred blocks.
@@ -1717,7 +1717,7 @@
 
       DEBUG(dbgs() << "          speculated to: " << *V << "\n");
       LI->replaceAllUsesWith(V);
-      Pass.DeadInsts.push_back(LI);
+      Pass.DeadInsts.insert(LI);
     }
   }
 };
@@ -2134,8 +2134,13 @@
     } else if (I->U->get()->getType()->getPointerElementType()->isStructTy()) {
       // Disable vector promotion when there are loads or stores of an FCA.
       return false;
-    } else if (!isa<LoadInst>(I->U->getUser()) &&
-               !isa<StoreInst>(I->U->getUser())) {
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(I->U->getUser())) {
+      if (LI->isVolatile())
+        return false;
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(I->U->getUser())) {
+      if (SI->isVolatile())
+        return false;
+    } else {
       return false;
     }
   }
@@ -2241,18 +2246,23 @@
 static Value *extractInteger(const DataLayout &DL, IRBuilder<> &IRB, Value *V,
                              IntegerType *Ty, uint64_t Offset,
                              const Twine &Name) {
+  DEBUG(dbgs() << "       start: " << *V << "\n");
   IntegerType *IntTy = cast<IntegerType>(V->getType());
   assert(DL.getTypeStoreSize(Ty) + Offset <= DL.getTypeStoreSize(IntTy) &&
          "Element extends past full value");
   uint64_t ShAmt = 8*Offset;
   if (DL.isBigEndian())
     ShAmt = 8*(DL.getTypeStoreSize(IntTy) - DL.getTypeStoreSize(Ty) - Offset);
-  if (ShAmt)
+  if (ShAmt) {
     V = IRB.CreateLShr(V, ShAmt, Name + ".shift");
+    DEBUG(dbgs() << "     shifted: " << *V << "\n");
+  }
   assert(Ty->getBitWidth() <= IntTy->getBitWidth() &&
          "Cannot extract to a larger integer!");
-  if (Ty != IntTy)
+  if (Ty != IntTy) {
     V = IRB.CreateTrunc(V, Ty, Name + ".trunc");
+    DEBUG(dbgs() << "     trunced: " << *V << "\n");
+  }
   return V;
 }
 
@@ -2262,20 +2272,27 @@
   IntegerType *Ty = cast<IntegerType>(V->getType());
   assert(Ty->getBitWidth() <= IntTy->getBitWidth() &&
          "Cannot insert a larger integer!");
-  if (Ty != IntTy)
+  DEBUG(dbgs() << "       start: " << *V << "\n");
+  if (Ty != IntTy) {
     V = IRB.CreateZExt(V, IntTy, Name + ".ext");
+    DEBUG(dbgs() << "    extended: " << *V << "\n");
+  }
   assert(DL.getTypeStoreSize(Ty) + Offset <= DL.getTypeStoreSize(IntTy) &&
          "Element store outside of alloca store");
   uint64_t ShAmt = 8*Offset;
   if (DL.isBigEndian())
     ShAmt = 8*(DL.getTypeStoreSize(IntTy) - DL.getTypeStoreSize(Ty) - Offset);
-  if (ShAmt)
+  if (ShAmt) {
     V = IRB.CreateShl(V, ShAmt, Name + ".shift");
+    DEBUG(dbgs() << "     shifted: " << *V << "\n");
+  }
 
   if (ShAmt || Ty->getBitWidth() < IntTy->getBitWidth()) {
     APInt Mask = ~Ty->getMask().zext(IntTy->getBitWidth()).shl(ShAmt);
     Old = IRB.CreateAnd(Old, Mask, Name + ".mask");
+    DEBUG(dbgs() << "      masked: " << *Old << "\n");
     V = IRB.CreateOr(Old, V, Name + ".insert");
+    DEBUG(dbgs() << "    inserted: " << *V << "\n");
   }
   return V;
 }
@@ -2442,30 +2459,21 @@
   void deleteIfTriviallyDead(Value *V) {
     Instruction *I = cast<Instruction>(V);
     if (isInstructionTriviallyDead(I))
-      Pass.DeadInsts.push_back(I);
+      Pass.DeadInsts.insert(I);
   }
 
-  bool rewriteVectorizedLoadInst(IRBuilder<> &IRB, LoadInst &LI, Value *OldOp) {
-    Value *Result;
+  Value *rewriteVectorizedLoadInst(IRBuilder<> &IRB, LoadInst &LI, Value *OldOp) {
+    Value *V = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
+                                     getName(".load"));
     if (LI.getType() == VecTy->getElementType() ||
         BeginOffset > NewAllocaBeginOffset || EndOffset < NewAllocaEndOffset) {
-      Result = IRB.CreateExtractElement(
-        IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(), getName(".load")),
-        getIndex(IRB, BeginOffset), getName(".extract"));
-    } else {
-      Result = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
-                                     getName(".load"));
+      V = IRB.CreateExtractElement(V, getIndex(IRB, BeginOffset),
+                                   getName(".extract"));
     }
-    if (Result->getType() != LI.getType())
-      Result = convertValue(TD, IRB, Result, LI.getType());
-    LI.replaceAllUsesWith(Result);
-    Pass.DeadInsts.push_back(&LI);
-
-    DEBUG(dbgs() << "          to: " << *Result << "\n");
-    return true;
+    return V;
   }
 
-  bool rewriteIntegerLoad(IRBuilder<> &IRB, LoadInst &LI) {
+  Value *rewriteIntegerLoad(IRBuilder<> &IRB, LoadInst &LI) {
     assert(IntTy && "We cannot insert an integer to the alloca");
     assert(!LI.isVolatile());
     Value *V = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
@@ -2473,12 +2481,10 @@
     V = convertValue(TD, IRB, V, IntTy);
     assert(BeginOffset >= NewAllocaBeginOffset && "Out of bounds offset");
     uint64_t Offset = BeginOffset - NewAllocaBeginOffset;
-    V = extractInteger(TD, IRB, V, cast<IntegerType>(LI.getType()), Offset,
-                       getName(".extract"));
-    LI.replaceAllUsesWith(V);
-    Pass.DeadInsts.push_back(&LI);
-    DEBUG(dbgs() << "          to: " << *V << "\n");
-    return true;
+    if (Offset > 0 || EndOffset < NewAllocaEndOffset)
+      V = extractInteger(TD, IRB, V, cast<IntegerType>(LI.getType()), Offset,
+                         getName(".extract"));
+    return V;
   }
 
   bool visitLoadInst(LoadInst &LI) {
@@ -2488,7 +2494,46 @@
     IRBuilder<> IRB(&LI);
 
     uint64_t Size = EndOffset - BeginOffset;
-    if (Size < TD.getTypeStoreSize(LI.getType())) {
+    bool IsSplitIntLoad = Size < TD.getTypeStoreSize(LI.getType());
+
+    // If this memory access can be shown to *statically* extend outside the
+    // bounds of the original allocation its behavior is undefined. Rather
+    // than trying to transform it, just replace it with undef.
+    // FIXME: We should do something more clever for functions being
+    // instrumented by asan.
+    // FIXME: Eventually, once ASan and friends can flush out bugs here, this
+    // should be transformed to a load of null making it unreachable.
+    uint64_t OldAllocSize = TD.getTypeAllocSize(OldAI.getAllocatedType());
+    if (TD.getTypeStoreSize(LI.getType()) > OldAllocSize) {
+      LI.replaceAllUsesWith(UndefValue::get(LI.getType()));
+      Pass.DeadInsts.insert(&LI);
+      deleteIfTriviallyDead(OldOp);
+      DEBUG(dbgs() << "          to: undef!!\n");
+      return true;
+    }
+
+    Type *TargetTy = IsSplitIntLoad ? Type::getIntNTy(LI.getContext(), Size * 8)
+                                    : LI.getType();
+    bool IsPtrAdjusted = false;
+    Value *V;
+    if (VecTy) {
+      V = rewriteVectorizedLoadInst(IRB, LI, OldOp);
+    } else if (IntTy && LI.getType()->isIntegerTy()) {
+      V = rewriteIntegerLoad(IRB, LI);
+    } else if (BeginOffset == NewAllocaBeginOffset &&
+               canConvertValue(TD, NewAllocaTy, LI.getType())) {
+      V = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
+                                LI.isVolatile(), getName(".load"));
+    } else {
+      Type *LTy = TargetTy->getPointerTo();
+      V = IRB.CreateAlignedLoad(getAdjustedAllocaPtr(IRB, LTy),
+                                getPartitionTypeAlign(TargetTy),
+                                LI.isVolatile(), getName(".load"));
+      IsPtrAdjusted = true;
+    }
+    V = convertValue(TD, IRB, V, TargetTy);
+
+    if (IsSplitIntLoad) {
       assert(!LI.isVolatile());
       assert(LI.getType()->isIntegerTy() &&
              "Only integer type loads and stores are split");
@@ -2498,21 +2543,8 @@
       assert(LI.getType()->getIntegerBitWidth() ==
              TD.getTypeAllocSizeInBits(OldAI.getAllocatedType()) &&
              "Only alloca-wide loads can be split and recomposed");
-      IntegerType *NarrowTy = Type::getIntNTy(LI.getContext(), Size * 8);
-      bool IsConvertable = (BeginOffset - NewAllocaBeginOffset == 0) &&
-                           canConvertValue(TD, NewAllocaTy, NarrowTy);
-      Value *V;
       // Move the insertion point just past the load so that we can refer to it.
       IRB.SetInsertPoint(llvm::next(BasicBlock::iterator(&LI)));
-      if (IsConvertable)
-        V = convertValue(TD, IRB,
-                         IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
-                                               getName(".load")),
-                         NarrowTy);
-      else
-        V = IRB.CreateAlignedLoad(
-          getAdjustedAllocaPtr(IRB, NarrowTy->getPointerTo()),
-          getPartitionTypeAlign(NarrowTy), getName(".load"));
       // Create a placeholder value with the same type as LI to use as the
       // basis for the new value. This allows us to replace the uses of LI with
       // the computed value, and then replace the placeholder with LI, leaving
@@ -2524,44 +2556,18 @@
       LI.replaceAllUsesWith(V);
       Placeholder->replaceAllUsesWith(&LI);
       delete Placeholder;
-      if (Pass.DeadSplitInsts.insert(&LI))
-        Pass.DeadInsts.push_back(&LI);
-      DEBUG(dbgs() << "          to: " << *V << "\n");
-      return IsConvertable;
+    } else {
+      LI.replaceAllUsesWith(V);
     }
 
-    if (VecTy)
-      return rewriteVectorizedLoadInst(IRB, LI, OldOp);
-    if (IntTy && LI.getType()->isIntegerTy())
-      return rewriteIntegerLoad(IRB, LI);
-
-    if (BeginOffset == NewAllocaBeginOffset &&
-        canConvertValue(TD, NewAllocaTy, LI.getType())) {
-      Value *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
-                                           LI.isVolatile(), getName(".load"));
-      Value *NewV = convertValue(TD, IRB, NewLI, LI.getType());
-      LI.replaceAllUsesWith(NewV);
-      Pass.DeadInsts.push_back(&LI);
-
-      DEBUG(dbgs() << "          to: " << *NewLI << "\n");
-      return !LI.isVolatile();
-    }
-
-    assert(!IntTy && "Invalid load found with int-op widening enabled");
-
-    Value *NewPtr = getAdjustedAllocaPtr(IRB,
-                                         LI.getPointerOperand()->getType());
-    LI.setOperand(0, NewPtr);
-    LI.setAlignment(getPartitionTypeAlign(LI.getType()));
-    DEBUG(dbgs() << "          to: " << LI << "\n");
-
+    Pass.DeadInsts.insert(&LI);
     deleteIfTriviallyDead(OldOp);
-    return NewPtr == &NewAI && !LI.isVolatile();
+    DEBUG(dbgs() << "          to: " << *V << "\n");
+    return !LI.isVolatile() && !IsPtrAdjusted;
   }
 
-  bool rewriteVectorizedStoreInst(IRBuilder<> &IRB, StoreInst &SI,
-                                  Value *OldOp) {
-    Value *V = SI.getValueOperand();
+  bool rewriteVectorizedStoreInst(IRBuilder<> &IRB, Value *V,
+                                  StoreInst &SI, Value *OldOp) {
     if (V->getType() == ElementTy ||
         BeginOffset > NewAllocaBeginOffset || EndOffset < NewAllocaEndOffset) {
       if (V->getType() != ElementTy)
@@ -2574,17 +2580,16 @@
       V = convertValue(TD, IRB, V, VecTy);
     }
     StoreInst *Store = IRB.CreateAlignedStore(V, &NewAI, NewAI.getAlignment());
-    Pass.DeadInsts.push_back(&SI);
+    Pass.DeadInsts.insert(&SI);
 
     (void)Store;
     DEBUG(dbgs() << "          to: " << *Store << "\n");
     return true;
   }
 
-  bool rewriteIntegerStore(IRBuilder<> &IRB, StoreInst &SI) {
+  bool rewriteIntegerStore(IRBuilder<> &IRB, Value *V, StoreInst &SI) {
     assert(IntTy && "We cannot extract an integer from the alloca");
     assert(!SI.isVolatile());
-    Value *V = SI.getValueOperand();
     if (TD.getTypeSizeInBits(V->getType()) != IntTy->getBitWidth()) {
       Value *Old = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
                                          getName(".oldload"));
@@ -2596,7 +2601,7 @@
     }
     V = convertValue(TD, IRB, V, NewAllocaTy);
     StoreInst *Store = IRB.CreateAlignedStore(V, &NewAI, NewAI.getAlignment());
-    Pass.DeadInsts.push_back(&SI);
+    Pass.DeadInsts.insert(&SI);
     (void)Store;
     DEBUG(dbgs() << "          to: " << *Store << "\n");
     return true;
@@ -2608,74 +2613,53 @@
     assert(OldOp == OldPtr);
     IRBuilder<> IRB(&SI);
 
-    if (VecTy)
-      return rewriteVectorizedStoreInst(IRB, SI, OldOp);
-    Type *ValueTy = SI.getValueOperand()->getType();
+    Value *V = SI.getValueOperand();
+
+    // Strip all inbounds GEPs and pointer casts to try to dig out any root
+    // alloca that should be re-examined after promoting this alloca.
+    if (V->getType()->isPointerTy())
+      if (AllocaInst *AI = dyn_cast<AllocaInst>(V->stripInBoundsOffsets()))
+        Pass.PostPromotionWorklist.insert(AI);
 
     uint64_t Size = EndOffset - BeginOffset;
-    if (Size < TD.getTypeStoreSize(ValueTy)) {
+    if (Size < TD.getTypeStoreSize(V->getType())) {
       assert(!SI.isVolatile());
-      assert(ValueTy->isIntegerTy() &&
+      assert(V->getType()->isIntegerTy() &&
              "Only integer type loads and stores are split");
-      assert(ValueTy->getIntegerBitWidth() ==
-             TD.getTypeStoreSizeInBits(ValueTy) &&
+      assert(V->getType()->getIntegerBitWidth() ==
+             TD.getTypeStoreSizeInBits(V->getType()) &&
              "Non-byte-multiple bit width");
-      assert(ValueTy->getIntegerBitWidth() ==
+      assert(V->getType()->getIntegerBitWidth() ==
              TD.getTypeSizeInBits(OldAI.getAllocatedType()) &&
              "Only alloca-wide stores can be split and recomposed");
       IntegerType *NarrowTy = Type::getIntNTy(SI.getContext(), Size * 8);
-      Value *V = extractInteger(TD, IRB, SI.getValueOperand(), NarrowTy,
-                                BeginOffset, getName(".extract"));
-      StoreInst *NewSI;
-      bool IsConvertable = (BeginOffset - NewAllocaBeginOffset == 0) &&
-                           canConvertValue(TD, NarrowTy, NewAllocaTy);
-      if (IsConvertable)
-        NewSI = IRB.CreateAlignedStore(convertValue(TD, IRB, V, NewAllocaTy),
-                                       &NewAI, NewAI.getAlignment());
-      else
-        NewSI = IRB.CreateAlignedStore(
-          V, getAdjustedAllocaPtr(IRB, NarrowTy->getPointerTo()),
-          getPartitionTypeAlign(NarrowTy));
-      (void)NewSI;
-      if (Pass.DeadSplitInsts.insert(&SI))
-        Pass.DeadInsts.push_back(&SI);
-
-      DEBUG(dbgs() << "          to: " << *NewSI << "\n");
-      return IsConvertable;
+      V = extractInteger(TD, IRB, V, NarrowTy, BeginOffset,
+                         getName(".extract"));
     }
 
-    if (IntTy && ValueTy->isIntegerTy())
-      return rewriteIntegerStore(IRB, SI);
-
-    // Strip all inbounds GEPs and pointer casts to try to dig out any root
-    // alloca that should be re-examined after promoting this alloca.
-    if (ValueTy->isPointerTy())
-      if (AllocaInst *AI = dyn_cast<AllocaInst>(SI.getValueOperand()
-                                                  ->stripInBoundsOffsets()))
-        Pass.PostPromotionWorklist.insert(AI);
+    if (VecTy)
+      return rewriteVectorizedStoreInst(IRB, V, SI, OldOp);
+    if (IntTy && V->getType()->isIntegerTy())
+      return rewriteIntegerStore(IRB, V, SI);
 
+    StoreInst *NewSI;
     if (BeginOffset == NewAllocaBeginOffset &&
-        canConvertValue(TD, ValueTy, NewAllocaTy)) {
-      Value *NewV = convertValue(TD, IRB, SI.getValueOperand(), NewAllocaTy);
-      StoreInst *NewSI = IRB.CreateAlignedStore(NewV, &NewAI, NewAI.getAlignment(),
-                                                SI.isVolatile());
-      (void)NewSI;
-      Pass.DeadInsts.push_back(&SI);
-
-      DEBUG(dbgs() << "          to: " << *NewSI << "\n");
-      return !SI.isVolatile();
-    }
-
-    assert(!IntTy && "Invalid store found with int-op widening enabled");
-
-    Value *NewPtr = getAdjustedAllocaPtr(IRB,
-                                         SI.getPointerOperand()->getType());
-    SI.setOperand(1, NewPtr);
-    SI.setAlignment(getPartitionTypeAlign(SI.getValueOperand()->getType()));
-    DEBUG(dbgs() << "          to: " << SI << "\n");
-
+        canConvertValue(TD, V->getType(), NewAllocaTy)) {
+      V = convertValue(TD, IRB, V, NewAllocaTy);
+      NewSI = IRB.CreateAlignedStore(V, &NewAI, NewAI.getAlignment(),
+                                     SI.isVolatile());
+    } else {
+      Value *NewPtr = getAdjustedAllocaPtr(IRB, V->getType()->getPointerTo());
+      NewSI = IRB.CreateAlignedStore(V, NewPtr,
+                                     getPartitionTypeAlign(V->getType()),
+                                     SI.isVolatile());
+    }
+    (void)NewSI;
+    Pass.DeadInsts.insert(&SI);
     deleteIfTriviallyDead(OldOp);
-    return NewPtr == &NewAI && !SI.isVolatile();
+
+    DEBUG(dbgs() << "          to: " << *NewSI << "\n");
+    return NewSI->getPointerOperand() == &NewAI && !SI.isVolatile();
   }
 
   bool visitMemSetInst(MemSetInst &II) {
@@ -2695,8 +2679,7 @@
     }
 
     // Record this instruction for deletion.
-    if (Pass.DeadSplitInsts.insert(&II))
-      Pass.DeadInsts.push_back(&II);
+    Pass.DeadInsts.insert(&II);
 
     Type *AllocaTy = NewAI.getAllocatedType();
     Type *ScalarTy = AllocaTy->getScalarType();
@@ -2852,8 +2835,7 @@
       return false;
     }
     // Record this instruction for deletion.
-    if (Pass.DeadSplitInsts.insert(&II))
-      Pass.DeadInsts.push_back(&II);
+    Pass.DeadInsts.insert(&II);
 
     bool IsWholeAlloca = BeginOffset == NewAllocaBeginOffset &&
                          EndOffset == NewAllocaEndOffset;
@@ -2963,8 +2945,7 @@
     assert(II.getArgOperand(1) == OldPtr);
 
     // Record this instruction for deletion.
-    if (Pass.DeadSplitInsts.insert(&II))
-      Pass.DeadInsts.push_back(&II);
+    Pass.DeadInsts.insert(&II);
 
     ConstantInt *Size
       = ConstantInt::get(cast<IntegerType>(II.getArgOperand(0)->getType()),
@@ -3533,7 +3514,7 @@
        DI != DE; ++DI) {
     Changed = true;
     (*DI)->replaceAllUsesWith(UndefValue::get((*DI)->getType()));
-    DeadInsts.push_back(*DI);
+    DeadInsts.insert(*DI);
   }
   for (AllocaPartitioning::dead_op_iterator DO = P.dead_op_begin(),
                                             DE = P.dead_op_end();
@@ -3544,7 +3525,7 @@
     if (Instruction *OldI = dyn_cast<Instruction>(OldV))
       if (isInstructionTriviallyDead(OldI)) {
         Changed = true;
-        DeadInsts.push_back(OldI);
+        DeadInsts.insert(OldI);
       }
   }
 
@@ -3565,7 +3546,6 @@
 /// We also record the alloca instructions deleted here so that they aren't
 /// subsequently handed to mem2reg to promote.
 void SROA::deleteDeadInstructions(SmallPtrSet<AllocaInst*, 4> &DeletedAllocas) {
-  DeadSplitInsts.clear();
   while (!DeadInsts.empty()) {
     Instruction *I = DeadInsts.pop_back_val();
     DEBUG(dbgs() << "Deleting dead instruction: " << *I << "\n");
@@ -3577,7 +3557,7 @@
         // Zero out the operand and see if it becomes trivially dead.
         *OI = 0;
         if (isInstructionTriviallyDead(U))
-          DeadInsts.push_back(U);
+          DeadInsts.insert(U);
       }
 
     if (AllocaInst *AI = dyn_cast<AllocaInst>(I))

Modified: llvm/branches/release_32/test/Transforms/SROA/basictest.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/branches/release_32/test/Transforms/SROA/basictest.ll?rev=168443&r1=168442&r2=168443&view=diff
==============================================================================
--- llvm/branches/release_32/test/Transforms/SROA/basictest.ll (original)
+++ llvm/branches/release_32/test/Transforms/SROA/basictest.ll Wed Nov 21 12:28:32 2012
@@ -1100,12 +1100,12 @@
   %imag = getelementptr inbounds { float, float }* %retval, i32 0, i32 1
   store float %phi.real, float* %real
   store float %phi.imag, float* %imag
+  ; CHECK-NEXT: %[[real_convert:.*]] = bitcast float %[[real]] to i32
   ; CHECK-NEXT: %[[imag_convert:.*]] = bitcast float %[[imag]] to i32
   ; CHECK-NEXT: %[[imag_ext:.*]] = zext i32 %[[imag_convert]] to i64
   ; CHECK-NEXT: %[[imag_shift:.*]] = shl i64 %[[imag_ext]], 32
   ; CHECK-NEXT: %[[imag_mask:.*]] = and i64 undef, 4294967295
   ; CHECK-NEXT: %[[imag_insert:.*]] = or i64 %[[imag_mask]], %[[imag_shift]]
-  ; CHECK-NEXT: %[[real_convert:.*]] = bitcast float %[[real]] to i32
   ; CHECK-NEXT: %[[real_ext:.*]] = zext i32 %[[real_convert]] to i64
   ; CHECK-NEXT: %[[real_mask:.*]] = and i64 %[[imag_insert]], -4294967296
   ; CHECK-NEXT: %[[real_insert:.*]] = or i64 %[[real_mask]], %[[real_ext]]

Modified: llvm/branches/release_32/test/Transforms/SROA/phi-and-select.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/branches/release_32/test/Transforms/SROA/phi-and-select.ll?rev=168443&r1=168442&r2=168443&view=diff
==============================================================================
--- llvm/branches/release_32/test/Transforms/SROA/phi-and-select.ll (original)
+++ llvm/branches/release_32/test/Transforms/SROA/phi-and-select.ll Wed Nov 21 12:28:32 2012
@@ -390,3 +390,38 @@
   %tmpcast.d.0 = select i1 undef, i32* %c, i32* %d.0
   br label %for.cond
 }
+
+define i64 @PR14132(i1 %flag) {
+; CHECK: @PR14132
+; Here we form a PHI-node by promoting the pointer alloca first, and then in
+; order to promote the other two allocas, we speculate the load of the
+; now-phi-node-pointer. In doing so we end up loading a 64-bit value from an i8
+; alloca, which is completely bogus. However, we were asserting on trying to
+; rewrite it. Now it is replaced with undef. Eventually we may replace it with
+; unreachable and even the CFG will go away here.
+entry:
+  %a = alloca i64
+  %b = alloca i8
+  %ptr = alloca i64*
+; CHECK-NOT: alloca
+
+  %ptr.cast = bitcast i64** %ptr to i8**
+  store i64 0, i64* %a
+  store i8 1, i8* %b
+  store i64* %a, i64** %ptr
+  br i1 %flag, label %if.then, label %if.end
+
+if.then:
+  store i8* %b, i8** %ptr.cast
+  br label %if.end
+
+if.end:
+  %tmp = load i64** %ptr
+  %result = load i64* %tmp
+; CHECK-NOT: store
+; CHECK-NOT: load
+; CHECK: %[[result:.*]] = phi i64 [ undef, %if.then ], [ 0, %entry ]
+
+  ret i64 %result
+; CHECK-NEXT: ret i64 %[[result]]
+}

Modified: llvm/branches/release_32/test/Transforms/SROA/vector-promotion.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/branches/release_32/test/Transforms/SROA/vector-promotion.ll?rev=168443&r1=168442&r2=168443&view=diff
==============================================================================
--- llvm/branches/release_32/test/Transforms/SROA/vector-promotion.ll (original)
+++ llvm/branches/release_32/test/Transforms/SROA/vector-promotion.ll Wed Nov 21 12:28:32 2012
@@ -220,3 +220,48 @@
   ret i32 %load
 ; CHECK: ret i32
 }
+
+define <2 x i8> @PR14349.1(i32 %x) {
+; CHECK: @PR14349.1
+; The first testcase for broken SROA rewriting of split integer loads and
+; stores due to smaller vector loads and stores. This particular test ensures
+; that we can rewrite a split store of an integer to a store of a vector.
+entry:
+  %a = alloca i32
+; CHECK-NOT: alloca
+
+  store i32 %x, i32* %a
+; CHECK-NOT: store
+
+  %cast = bitcast i32* %a to <2 x i8>*
+  %vec = load <2 x i8>* %cast
+; CHECK-NOT: load
+
+  ret <2 x i8> %vec
+; CHECK: %[[trunc:.*]] = trunc i32 %x to i16
+; CHECK: %[[cast:.*]] = bitcast i16 %[[trunc]] to <2 x i8>
+; CHECK: ret <2 x i8> %[[cast]]
+}
+
+define i32 @PR14349.2(<2 x i8> %x) {
+; CHECK: @PR14349.2
+; The second testcase for broken SROA rewriting of split integer loads and
+; stores due to smaller vector loads and stores. This particular test ensures
+; that we can rewrite a split load of an integer to a load of a vector.
+entry:
+  %a = alloca i32
+; CHECK-NOT: alloca
+
+  %cast = bitcast i32* %a to <2 x i8>*
+  store <2 x i8> %x, <2 x i8>* %cast
+; CHECK-NOT: store
+
+  %int = load i32* %a
+; CHECK-NOT: load
+
+  ret i32 %int
+; CHECK: %[[cast:.*]] = bitcast <2 x i8> %x to i16
+; CHECK: %[[trunc:.*]] = zext i16 %[[cast]] to i32
+; CHECK: %[[insert:.*]] = or i32 %{{.*}}, %[[trunc]]
+; CHECK: ret i32 %[[insert]]
+}

More information about the llvm-branch-commits mailing list