[llvm] 2fe81ed - [NFC][RemoveDIs] Insert instruction using iterators in Transforms/
Jeremy Morse via llvm-commits
llvm-commits at lists.llvm.org
Tue Mar 5 07:20:26 PST 2024
Author: Jeremy Morse
Date: 2024-03-05T15:12:22Z
New Revision: 2fe81edef6f0b35abffbbc59b649b30ea9c15a62
URL: https://github.com/llvm/llvm-project/commit/2fe81edef6f0b35abffbbc59b649b30ea9c15a62
DIFF: https://github.com/llvm/llvm-project/commit/2fe81edef6f0b35abffbbc59b649b30ea9c15a62.diff
LOG: [NFC][RemoveDIs] Insert instruction using iterators in Transforms/
As part of the RemoveDIs project we need LLVM to insert instructions using
iterators wherever possible, so that the iterators can carry a bit of
debug-info. This commit implements some of that by updating the contents of
llvm/lib/Transforms to always use iterator-versions of instruction
constructors.
There are two general flavours of update:
* Almost all call-sites just call getIterator on an instruction (the first
  sketch below illustrates the pattern),
* Several make use of an existing iterator, in scenarios where the choice
  of position is actually significant for debug-info.
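A minimal sketch of the first flavour, using hypothetical names rather than
code lifted from this patch:

  // Before: the insertion position is a bare Instruction pointer.
  new StoreInst(Val, Ptr, InsertBefore);
  // After: unwrap the same position into an iterator at the call-site.
  new StoreInst(Val, Ptr, InsertBefore->getIterator());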
The underlying logic is that any call to getFirstInsertionPt or similar
APIs that identify the start of a block needs to have that iterator passed
directly to the insertion function, without being converted to a bare
Instruction pointer along the way. For example:
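A sketch of that pattern (names are illustrative, not tied to any one pass):

  BasicBlock::iterator IP = BB->getFirstInsertionPt();
  // Pass the iterator straight through; round-tripping via &*IP would
  // discard the debug-info bit that the iterator carries.
  PHINode *PN = PHINode::Create(Ty, NumPreds, "phi", IP);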
Noteworthy changes:
* FindInsertedValue now takes an optional iterator rather than an
  instruction pointer, as we need to always insert with iterators,
* I've added a few iterator-taking versions of some value-tracking and
  DomTree methods -- they just unwrap the iterator. These are purely
  convenience methods to avoid extra syntax in some passes,
* A few calls to getNextNode become std::next instead, to keep in the
  theme of using iterators for positions (see the sketch after this list),
* SeparateConstOffsetFromGEP has its insertion-position field changed.
  Noteworthy because it's not a purely localised spelling change.
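The getNextNode replacement follows the shape of the AttributorAttributes.cpp
hunk below; as a standalone sketch (argument names abbreviated here):

  // Before: insert after AI by fetching the next instruction.
  //   new AllocaInst(Ty, AS, Size, Align, Name, AI->getNextNode());
  // After: advance AI's own iterator, so the position stays an iterator.
  BasicBlock::iterator InsertPt = std::next(AI->getIterator());
  new AllocaInst(Ty, AS, Size, Align, Name, InsertPt);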
All this should be NFC.
Added:
Modified:
llvm/include/llvm/Analysis/ValueTracking.h
llvm/include/llvm/IR/Dominators.h
llvm/include/llvm/Transforms/Scalar/Reassociate.h
llvm/lib/Analysis/ValueTracking.cpp
llvm/lib/Transforms/Coroutines/CoroElide.cpp
llvm/lib/Transforms/Coroutines/CoroSplit.cpp
llvm/lib/Transforms/Coroutines/Coroutines.cpp
llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
llvm/lib/Transforms/IPO/Attributor.cpp
llvm/lib/Transforms/IPO/AttributorAttributes.cpp
llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
llvm/lib/Transforms/IPO/GlobalOpt.cpp
llvm/lib/Transforms/IPO/IROutliner.cpp
llvm/lib/Transforms/IPO/OpenMPOpt.cpp
llvm/lib/Transforms/IPO/WholeProgramDevirt.cpp
llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
llvm/lib/Transforms/InstCombine/InstCombinePHI.cpp
llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
llvm/lib/Transforms/Instrumentation/KCFI.cpp
llvm/lib/Transforms/ObjCARC/ObjCARC.cpp
llvm/lib/Transforms/ObjCARC/ObjCARC.h
llvm/lib/Transforms/ObjCARC/ObjCARCContract.cpp
llvm/lib/Transforms/ObjCARC/ObjCARCOpts.cpp
llvm/lib/Transforms/Scalar/CorrelatedValuePropagation.cpp
llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
llvm/lib/Transforms/Scalar/DivRemPairs.cpp
llvm/lib/Transforms/Scalar/GVN.cpp
llvm/lib/Transforms/Scalar/GuardWidening.cpp
llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp
llvm/lib/Transforms/Scalar/JumpThreading.cpp
llvm/lib/Transforms/Scalar/LICM.cpp
llvm/lib/Transforms/Scalar/LoopFlatten.cpp
llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp
llvm/lib/Transforms/Scalar/LoopLoadElimination.cpp
llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
llvm/lib/Transforms/Scalar/NaryReassociate.cpp
llvm/lib/Transforms/Scalar/NewGVN.cpp
llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
llvm/lib/Transforms/Scalar/Reassociate.cpp
llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp
llvm/lib/Transforms/Scalar/SROA.cpp
llvm/lib/Transforms/Scalar/SeparateConstOffsetFromGEP.cpp
llvm/lib/Transforms/Scalar/SimpleLoopUnswitch.cpp
llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
Removed:
################################################################################
diff --git a/llvm/include/llvm/Analysis/ValueTracking.h b/llvm/include/llvm/Analysis/ValueTracking.h
index 9cfb7af9dba0d7..3970efba18cc8c 100644
--- a/llvm/include/llvm/Analysis/ValueTracking.h
+++ b/llvm/include/llvm/Analysis/ValueTracking.h
@@ -596,10 +596,11 @@ Value *isBytewiseValue(Value *V, const DataLayout &DL);
/// indexed is already around as a register, for example if it were inserted
/// directly into the aggregate.
///
-/// If InsertBefore is not null, this function will duplicate (modified)
+/// If InsertBefore is not empty, this function will duplicate (modified)
/// insertvalues when a part of a nested struct is extracted.
-Value *FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
- Instruction *InsertBefore = nullptr);
+Value *FindInsertedValue(
+ Value *V, ArrayRef<unsigned> idx_range,
+ std::optional<BasicBlock::iterator> InsertBefore = std::nullopt);
/// Analyze the specified pointer to see if it can be expressed as a base
/// pointer plus a constant offset. Return the base and offset to the caller.
@@ -793,6 +794,15 @@ bool isSafeToSpeculativelyExecute(const Instruction *I,
const DominatorTree *DT = nullptr,
const TargetLibraryInfo *TLI = nullptr);
+inline bool
+isSafeToSpeculativelyExecute(const Instruction *I, BasicBlock::iterator CtxI,
+ AssumptionCache *AC = nullptr,
+ const DominatorTree *DT = nullptr,
+ const TargetLibraryInfo *TLI = nullptr) {
+ // Take an iterator, and unwrap it into an Instruction *.
+ return isSafeToSpeculativelyExecute(I, &*CtxI, AC, DT, TLI);
+}
+
/// This returns the same result as isSafeToSpeculativelyExecute if Opcode is
/// the actual opcode of Inst. If the provided and actual opcode differ, the
/// function (virtually) overrides the opcode of Inst with the provided
@@ -1019,6 +1029,15 @@ bool isGuaranteedNotToBePoison(const Value *V, AssumptionCache *AC = nullptr,
const DominatorTree *DT = nullptr,
unsigned Depth = 0);
+inline bool isGuaranteedNotToBePoison(const Value *V, AssumptionCache *AC,
+ BasicBlock::iterator CtxI,
+ const DominatorTree *DT = nullptr,
+ unsigned Depth = 0) {
+ // Takes an iterator as a position, passes down to Instruction *
+ // implementation.
+ return isGuaranteedNotToBePoison(V, AC, &*CtxI, DT, Depth);
+}
+
/// Returns true if V cannot be undef, but may be poison.
bool isGuaranteedNotToBeUndef(const Value *V, AssumptionCache *AC = nullptr,
const Instruction *CtxI = nullptr,
diff --git a/llvm/include/llvm/IR/Dominators.h b/llvm/include/llvm/IR/Dominators.h
index 42db4c4ea3a559..287f419f893dbd 100644
--- a/llvm/include/llvm/IR/Dominators.h
+++ b/llvm/include/llvm/IR/Dominators.h
@@ -189,9 +189,13 @@ class DominatorTree : public DominatorTreeBase<BasicBlock, false> {
/// like unreachable code or trivial phi cycles).
/// * Invoke Defs only dominate uses in their default destination.
bool dominates(const Value *Def, const Use &U) const;
+
/// Return true if value Def dominates all possible uses inside instruction
/// User. Same comments as for the Use-based API apply.
bool dominates(const Value *Def, const Instruction *User) const;
+ bool dominates(const Value *Def, BasicBlock::iterator User) const {
+ return dominates(Def, &*User);
+ }
/// Returns true if Def would dominate a use in any instruction in BB.
/// If Def is an instruction in BB, then Def does not dominate BB.
diff --git a/llvm/include/llvm/Transforms/Scalar/Reassociate.h b/llvm/include/llvm/Transforms/Scalar/Reassociate.h
index 7e47f8ae5d81e9..f3a2e0f4380eb0 100644
--- a/llvm/include/llvm/Transforms/Scalar/Reassociate.h
+++ b/llvm/include/llvm/Transforms/Scalar/Reassociate.h
@@ -110,9 +110,9 @@ class ReassociatePass : public PassInfoMixin<ReassociatePass> {
SmallVectorImpl<reassociate::ValueEntry> &Ops);
Value *OptimizeXor(Instruction *I,
SmallVectorImpl<reassociate::ValueEntry> &Ops);
- bool CombineXorOpnd(Instruction *I, reassociate::XorOpnd *Opnd1,
+ bool CombineXorOpnd(BasicBlock::iterator It, reassociate::XorOpnd *Opnd1,
APInt &ConstOpnd, Value *&Res);
- bool CombineXorOpnd(Instruction *I, reassociate::XorOpnd *Opnd1,
+ bool CombineXorOpnd(BasicBlock::iterator It, reassociate::XorOpnd *Opnd1,
reassociate::XorOpnd *Opnd2, APInt &ConstOpnd,
Value *&Res);
Value *buildMinimalMultiplyDAG(IRBuilderBase &Builder,
diff --git a/llvm/lib/Analysis/ValueTracking.cpp b/llvm/lib/Analysis/ValueTracking.cpp
index 69b2710420391d..8d33b1c37b198f 100644
--- a/llvm/lib/Analysis/ValueTracking.cpp
+++ b/llvm/lib/Analysis/ValueTracking.cpp
@@ -5536,10 +5536,10 @@ Value *llvm::isBytewiseValue(Value *V, const DataLayout &DL) {
// indices from Idxs that should be left out when inserting into the resulting
// struct. To is the result struct built so far, new insertvalue instructions
// build on that.
-static Value *BuildSubAggregate(Value *From, Value* To, Type *IndexedType,
+static Value *BuildSubAggregate(Value *From, Value *To, Type *IndexedType,
SmallVectorImpl<unsigned> &Idxs,
unsigned IdxSkip,
- Instruction *InsertBefore) {
+ BasicBlock::iterator InsertBefore) {
StructType *STy = dyn_cast<StructType>(IndexedType);
if (STy) {
// Save the original To argument so we can modify it
@@ -5596,8 +5596,7 @@ static Value *BuildSubAggregate(Value *From, Value* To, Type *IndexedType,
//
// All inserted insertvalue instructions are inserted before InsertBefore
static Value *BuildSubAggregate(Value *From, ArrayRef<unsigned> idx_range,
- Instruction *InsertBefore) {
- assert(InsertBefore && "Must have someplace to insert!");
+ BasicBlock::iterator InsertBefore) {
Type *IndexedType = ExtractValueInst::getIndexedType(From->getType(),
idx_range);
Value *To = PoisonValue::get(IndexedType);
@@ -5613,8 +5612,9 @@ static Value *BuildSubAggregate(Value *From, ArrayRef<unsigned> idx_range,
///
/// If InsertBefore is not null, this function will duplicate (modified)
/// insertvalues when a part of a nested struct is extracted.
-Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
- Instruction *InsertBefore) {
+Value *
+llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
+ std::optional<BasicBlock::iterator> InsertBefore) {
// Nothing to index? Just return V then (this is useful at the end of our
// recursion).
if (idx_range.empty())
@@ -5653,7 +5653,7 @@ Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
// which allows the unused 0,0 element from the nested struct to be
// removed.
return BuildSubAggregate(V, ArrayRef(idx_range.begin(), req_idx),
- InsertBefore);
+ *InsertBefore);
}
// This insert value inserts something else than what we are looking for.
@@ -5661,7 +5661,7 @@ Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
// looking for, then.
if (*req_idx != *i)
return FindInsertedValue(I->getAggregateOperand(), idx_range,
- InsertBefore);
+ *InsertBefore);
}
// If we end up here, the indices of the insertvalue match with those
// requested (though possibly only partially). Now we recursively look at
diff --git a/llvm/lib/Transforms/Coroutines/CoroElide.cpp b/llvm/lib/Transforms/Coroutines/CoroElide.cpp
index 2f4083028ae054..d356a6d2e57594 100644
--- a/llvm/lib/Transforms/Coroutines/CoroElide.cpp
+++ b/llvm/lib/Transforms/Coroutines/CoroElide.cpp
@@ -141,8 +141,9 @@ static std::unique_ptr<raw_fd_ostream> getOrCreateLogFile() {
void Lowerer::elideHeapAllocations(Function *F, uint64_t FrameSize,
Align FrameAlign, AAResults &AA) {
LLVMContext &C = F->getContext();
- auto *InsertPt =
- getFirstNonAllocaInTheEntryBlock(CoroIds.front()->getFunction());
+ BasicBlock::iterator InsertPt =
+ getFirstNonAllocaInTheEntryBlock(CoroIds.front()->getFunction())
+ ->getIterator();
// Replacing llvm.coro.alloc with false will suppress dynamic
// allocation as it is expected for the frontend to generate the code that
diff --git a/llvm/lib/Transforms/Coroutines/CoroSplit.cpp b/llvm/lib/Transforms/Coroutines/CoroSplit.cpp
index 90d40242ff2e44..99675aa495f531 100644
--- a/llvm/lib/Transforms/Coroutines/CoroSplit.cpp
+++ b/llvm/lib/Transforms/Coroutines/CoroSplit.cpp
@@ -1423,7 +1423,7 @@ static bool simplifySuspendPoint(CoroSuspendInst *Suspend,
// No longer need a call to coro.resume or coro.destroy.
if (auto *Invoke = dyn_cast<InvokeInst>(CB)) {
- BranchInst::Create(Invoke->getNormalDest(), Invoke);
+ BranchInst::Create(Invoke->getNormalDest(), Invoke->getIterator());
}
// Grab the CalledValue from CB before erasing the CallInstr.
diff --git a/llvm/lib/Transforms/Coroutines/Coroutines.cpp b/llvm/lib/Transforms/Coroutines/Coroutines.cpp
index eef5543bae24ab..7bd151ed4dc1e3 100644
--- a/llvm/lib/Transforms/Coroutines/Coroutines.cpp
+++ b/llvm/lib/Transforms/Coroutines/Coroutines.cpp
@@ -55,7 +55,7 @@ Value *coro::LowererBase::makeSubFnCall(Value *Arg, int Index,
assert(Index >= CoroSubFnInst::IndexFirst &&
Index < CoroSubFnInst::IndexLast &&
"makeSubFnCall: Index value out of range");
- return CallInst::Create(Fn, {Arg, IndexVal}, "", InsertPt);
+ return CallInst::Create(Fn, {Arg, IndexVal}, "", InsertPt->getIterator());
}
// NOTE: Must be sorted!
@@ -157,8 +157,8 @@ static CoroSaveInst *createCoroSave(CoroBeginInst *CoroBegin,
CoroSuspendInst *SuspendInst) {
Module *M = SuspendInst->getModule();
auto *Fn = Intrinsic::getDeclaration(M, Intrinsic::coro_save);
- auto *SaveInst =
- cast<CoroSaveInst>(CallInst::Create(Fn, CoroBegin, "", SuspendInst));
+ auto *SaveInst = cast<CoroSaveInst>(
+ CallInst::Create(Fn, CoroBegin, "", SuspendInst->getIterator()));
assert(!SuspendInst->getCoroSave());
SuspendInst->setArgOperand(0, SaveInst);
return SaveInst;
@@ -362,7 +362,7 @@ void coro::Shape::buildFrom(Function &F) {
// calls, but that messes with our invariants. Re-insert the
// bitcast and ignore this type mismatch.
if (CastInst::isBitCastable(SrcTy, *RI)) {
- auto BCI = new BitCastInst(*SI, *RI, "", Suspend);
+ auto BCI = new BitCastInst(*SI, *RI, "", Suspend->getIterator());
SI->set(BCI);
continue;
}
diff --git a/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp b/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
index 8058282c422503..e89ec353487eef 100644
--- a/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
+++ b/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
@@ -266,9 +266,10 @@ doPromotion(Function *F, FunctionAnalysisManager &FAM,
CallBase *NewCS = nullptr;
if (InvokeInst *II = dyn_cast<InvokeInst>(&CB)) {
NewCS = InvokeInst::Create(NF, II->getNormalDest(), II->getUnwindDest(),
- Args, OpBundles, "", &CB);
+ Args, OpBundles, "", CB.getIterator());
} else {
- auto *NewCall = CallInst::Create(NF, Args, OpBundles, "", &CB);
+ auto *NewCall =
+ CallInst::Create(NF, Args, OpBundles, "", CB.getIterator());
NewCall->setTailCallKind(cast<CallInst>(&CB)->getTailCallKind());
NewCS = NewCall;
}
diff --git a/llvm/lib/Transforms/IPO/Attributor.cpp b/llvm/lib/Transforms/IPO/Attributor.cpp
index 72a2aadc204ba8..e3920b9e1d2baf 100644
--- a/llvm/lib/Transforms/IPO/Attributor.cpp
+++ b/llvm/lib/Transforms/IPO/Attributor.cpp
@@ -3123,12 +3123,12 @@ ChangeStatus Attributor::rewriteFunctionSignatures(
// Create a new call or invoke instruction to replace the old one.
CallBase *NewCB;
if (InvokeInst *II = dyn_cast<InvokeInst>(OldCB)) {
- NewCB =
- InvokeInst::Create(NewFn, II->getNormalDest(), II->getUnwindDest(),
- NewArgOperands, OperandBundleDefs, "", OldCB);
+ NewCB = InvokeInst::Create(NewFn, II->getNormalDest(),
+ II->getUnwindDest(), NewArgOperands,
+ OperandBundleDefs, "", OldCB->getIterator());
} else {
auto *NewCI = CallInst::Create(NewFn, NewArgOperands, OperandBundleDefs,
- "", OldCB);
+ "", OldCB->getIterator());
NewCI->setTailCallKind(cast<CallInst>(OldCB)->getTailCallKind());
NewCB = NewCI;
}
diff --git a/llvm/lib/Transforms/IPO/AttributorAttributes.cpp b/llvm/lib/Transforms/IPO/AttributorAttributes.cpp
index 585364dd7aa2e9..488a6f0bb153a4 100644
--- a/llvm/lib/Transforms/IPO/AttributorAttributes.cpp
+++ b/llvm/lib/Transforms/IPO/AttributorAttributes.cpp
@@ -6128,8 +6128,8 @@ struct AAValueSimplifyImpl : AAValueSimplify {
return TypedV;
if (CtxI && V.getType()->canLosslesslyBitCastTo(&Ty))
return Check ? &V
- : BitCastInst::CreatePointerBitCastOrAddrSpaceCast(&V, &Ty,
- "", CtxI);
+ : BitCastInst::CreatePointerBitCastOrAddrSpaceCast(
+ &V, &Ty, "", CtxI->getIterator());
return nullptr;
}
@@ -6731,8 +6731,9 @@ struct AAHeapToStackFunction final : public AAHeapToStack {
Size = SizeOffsetPair.Size;
}
- Instruction *IP =
- AI.MoveAllocaIntoEntry ? &F->getEntryBlock().front() : AI.CB;
+ BasicBlock::iterator IP = AI.MoveAllocaIntoEntry
+ ? F->getEntryBlock().begin()
+ : AI.CB->getIterator();
Align Alignment(1);
if (MaybeAlign RetAlign = AI.CB->getRetAlign())
@@ -6753,7 +6754,7 @@ struct AAHeapToStackFunction final : public AAHeapToStack {
if (Alloca->getType() != AI.CB->getType())
Alloca = BitCastInst::CreatePointerBitCastOrAddrSpaceCast(
- Alloca, AI.CB->getType(), "malloc_cast", AI.CB);
+ Alloca, AI.CB->getType(), "malloc_cast", AI.CB->getIterator());
auto *I8Ty = Type::getInt8Ty(F->getContext());
auto *InitVal = getInitialValueOfAllocation(AI.CB, TLI, I8Ty);
@@ -7450,10 +7451,10 @@ struct AAPrivatizablePtrArgument final : public AAPrivatizablePtrImpl {
/// The values needed are taken from the arguments of \p F starting at
/// position \p ArgNo.
static void createInitialization(Type *PrivType, Value &Base, Function &F,
- unsigned ArgNo, Instruction &IP) {
+ unsigned ArgNo, BasicBlock::iterator IP) {
assert(PrivType && "Expected privatizable type!");
- IRBuilder<NoFolder> IRB(&IP);
+ IRBuilder<NoFolder> IRB(IP->getParent(), IP);
const DataLayout &DL = F.getParent()->getDataLayout();
// Traverse the type, build GEPs and stores.
@@ -7462,17 +7463,17 @@ struct AAPrivatizablePtrArgument final : public AAPrivatizablePtrImpl {
for (unsigned u = 0, e = PrivStructType->getNumElements(); u < e; u++) {
Value *Ptr =
constructPointer(&Base, PrivStructLayout->getElementOffset(u), IRB);
- new StoreInst(F.getArg(ArgNo + u), Ptr, &IP);
+ new StoreInst(F.getArg(ArgNo + u), Ptr, IP);
}
} else if (auto *PrivArrayType = dyn_cast<ArrayType>(PrivType)) {
Type *PointeeTy = PrivArrayType->getElementType();
uint64_t PointeeTySize = DL.getTypeStoreSize(PointeeTy);
for (unsigned u = 0, e = PrivArrayType->getNumElements(); u < e; u++) {
Value *Ptr = constructPointer(&Base, u * PointeeTySize, IRB);
- new StoreInst(F.getArg(ArgNo + u), Ptr, &IP);
+ new StoreInst(F.getArg(ArgNo + u), Ptr, IP);
}
} else {
- new StoreInst(F.getArg(ArgNo), &Base, &IP);
+ new StoreInst(F.getArg(ArgNo), &Base, IP);
}
}
@@ -7495,7 +7496,7 @@ struct AAPrivatizablePtrArgument final : public AAPrivatizablePtrImpl {
Type *PointeeTy = PrivStructType->getElementType(u);
Value *Ptr =
constructPointer(Base, PrivStructLayout->getElementOffset(u), IRB);
- LoadInst *L = new LoadInst(PointeeTy, Ptr, "", IP);
+ LoadInst *L = new LoadInst(PointeeTy, Ptr, "", IP->getIterator());
L->setAlignment(Alignment);
ReplacementValues.push_back(L);
}
@@ -7504,12 +7505,12 @@ struct AAPrivatizablePtrArgument final : public AAPrivatizablePtrImpl {
uint64_t PointeeTySize = DL.getTypeStoreSize(PointeeTy);
for (unsigned u = 0, e = PrivArrayType->getNumElements(); u < e; u++) {
Value *Ptr = constructPointer(Base, u * PointeeTySize, IRB);
- LoadInst *L = new LoadInst(PointeeTy, Ptr, "", IP);
+ LoadInst *L = new LoadInst(PointeeTy, Ptr, "", IP->getIterator());
L->setAlignment(Alignment);
ReplacementValues.push_back(L);
}
} else {
- LoadInst *L = new LoadInst(PrivType, Base, "", IP);
+ LoadInst *L = new LoadInst(PrivType, Base, "", IP->getIterator());
L->setAlignment(Alignment);
ReplacementValues.push_back(L);
}
@@ -7549,13 +7550,13 @@ struct AAPrivatizablePtrArgument final : public AAPrivatizablePtrImpl {
[=](const Attributor::ArgumentReplacementInfo &ARI,
Function &ReplacementFn, Function::arg_iterator ArgIt) {
BasicBlock &EntryBB = ReplacementFn.getEntryBlock();
- Instruction *IP = &*EntryBB.getFirstInsertionPt();
+ BasicBlock::iterator IP = EntryBB.getFirstInsertionPt();
const DataLayout &DL = IP->getModule()->getDataLayout();
unsigned AS = DL.getAllocaAddrSpace();
Instruction *AI = new AllocaInst(*PrivatizableType, AS,
Arg->getName() + ".priv", IP);
createInitialization(*PrivatizableType, *AI, ReplacementFn,
- ArgIt->getArgNo(), *IP);
+ ArgIt->getArgNo(), IP);
if (AI->getType() != Arg->getType())
AI = BitCastInst::CreatePointerBitCastOrAddrSpaceCast(
@@ -12313,10 +12314,10 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
Value *FP = CB->getCalledOperand();
if (FP->getType()->getPointerAddressSpace())
FP = new AddrSpaceCastInst(FP, PointerType::get(FP->getType(), 0),
- FP->getName() + ".as0", CB);
+ FP->getName() + ".as0", CB->getIterator());
bool CBIsVoid = CB->getType()->isVoidTy();
- Instruction *IP = CB;
+ BasicBlock::iterator IP = CB->getIterator();
FunctionType *CSFT = CB->getFunctionType();
SmallVector<Value *> CSArgs(CB->arg_begin(), CB->arg_end());
@@ -12336,8 +12337,9 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
promoteCall(*CB, NewCallee, nullptr);
return ChangeStatus::CHANGED;
}
- Instruction *NewCall = CallInst::Create(FunctionCallee(CSFT, NewCallee),
- CSArgs, CB->getName(), CB);
+ Instruction *NewCall =
+ CallInst::Create(FunctionCallee(CSFT, NewCallee), CSArgs,
+ CB->getName(), CB->getIterator());
if (!CBIsVoid)
A.changeAfterManifest(IRPosition::callsite_returned(*CB), *NewCall);
A.deleteAfterManifest(*CB);
@@ -12372,11 +12374,11 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
A.registerManifestAddedBasicBlock(*CBBB);
auto *SplitTI = cast<BranchInst>(LastCmp->getNextNode());
BasicBlock *ElseBB;
- if (IP == CB) {
+ if (&*IP == CB) {
ElseBB = BasicBlock::Create(ThenTI->getContext(), "",
ThenTI->getFunction(), CBBB);
A.registerManifestAddedBasicBlock(*ElseBB);
- IP = BranchInst::Create(CBBB, ElseBB);
+ IP = BranchInst::Create(CBBB, ElseBB)->getIterator();
SplitTI->replaceUsesOfWith(CBBB, ElseBB);
} else {
ElseBB = IP->getParent();
@@ -12390,7 +12392,7 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
NewCall = &cast<CallInst>(promoteCall(*CBClone, NewCallee, &RetBC));
} else {
NewCall = CallInst::Create(FunctionCallee(CSFT, NewCallee), CSArgs,
- CB->getName(), ThenTI);
+ CB->getName(), ThenTI->getIterator());
}
NewCalls.push_back({NewCall, RetBC});
}
@@ -12416,7 +12418,7 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
} else {
auto *CBClone = cast<CallInst>(CB->clone());
CBClone->setName(CB->getName());
- CBClone->insertBefore(IP);
+ CBClone->insertBefore(*IP->getParent(), IP);
NewCalls.push_back({CBClone, nullptr});
AttachCalleeMetadata(*CBClone);
}
@@ -12425,7 +12427,7 @@ struct AAIndirectCallInfoCallSite : public AAIndirectCallInfo {
if (!CBIsVoid) {
auto *PHI = PHINode::Create(CB->getType(), NewCalls.size(),
CB->getName() + ".phi",
- &*CB->getParent()->getFirstInsertionPt());
+ CB->getParent()->getFirstInsertionPt());
for (auto &It : NewCalls) {
CallBase *NewCall = It.first;
Instruction *CallRet = It.second ? It.second : It.first;
@@ -12783,9 +12785,11 @@ struct AAAllocationInfoImpl : public AAAllocationInfo {
auto *NumBytesToValue =
ConstantInt::get(I->getContext(), APInt(32, NumBytesToAllocate));
+ BasicBlock::iterator insertPt = AI->getIterator();
+ insertPt = std::next(insertPt);
AllocaInst *NewAllocaInst =
new AllocaInst(CharType, AI->getAddressSpace(), NumBytesToValue,
- AI->getAlign(), AI->getName(), AI->getNextNode());
+ AI->getAlign(), AI->getName(), insertPt);
if (A.changeAfterManifest(IRPosition::inst(*AI), *NewAllocaInst))
return ChangeStatus::CHANGED;
diff --git a/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp b/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
index 4f65748c19e60f..f19031383f5cb3 100644
--- a/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
+++ b/llvm/lib/Transforms/IPO/DeadArgumentElimination.cpp
@@ -204,9 +204,9 @@ bool DeadArgumentEliminationPass::deleteDeadVarargs(Function &F) {
CallBase *NewCB = nullptr;
if (InvokeInst *II = dyn_cast<InvokeInst>(CB)) {
NewCB = InvokeInst::Create(NF, II->getNormalDest(), II->getUnwindDest(),
- Args, OpBundles, "", CB);
+ Args, OpBundles, "", CB->getIterator());
} else {
- NewCB = CallInst::Create(NF, Args, OpBundles, "", CB);
+ NewCB = CallInst::Create(NF, Args, OpBundles, "", CB->getIterator());
cast<CallInst>(NewCB)->setTailCallKind(
cast<CallInst>(CB)->getTailCallKind());
}
@@ -946,7 +946,7 @@ bool DeadArgumentEliminationPass::removeDeadStuffFromFunction(Function *F) {
NewCB = InvokeInst::Create(NF, II->getNormalDest(), II->getUnwindDest(),
Args, OpBundles, "", CB.getParent());
} else {
- NewCB = CallInst::Create(NFTy, NF, Args, OpBundles, "", &CB);
+ NewCB = CallInst::Create(NFTy, NF, Args, OpBundles, "", CB.getIterator());
cast<CallInst>(NewCB)->setTailCallKind(
cast<CallInst>(&CB)->getTailCallKind());
}
@@ -1070,7 +1070,8 @@ bool DeadArgumentEliminationPass::removeDeadStuffFromFunction(Function *F) {
}
// Replace the return instruction with one returning the new return
// value (possibly 0 if we became void).
- auto *NewRet = ReturnInst::Create(F->getContext(), RetVal, RI);
+ auto *NewRet =
+ ReturnInst::Create(F->getContext(), RetVal, RI->getIterator());
NewRet->setDebugLoc(RI->getDebugLoc());
RI->eraseFromParent();
}
diff --git a/llvm/lib/Transforms/IPO/GlobalOpt.cpp b/llvm/lib/Transforms/IPO/GlobalOpt.cpp
index c92b5d82fc85a4..da714c9a75701b 100644
--- a/llvm/lib/Transforms/IPO/GlobalOpt.cpp
+++ b/llvm/lib/Transforms/IPO/GlobalOpt.cpp
@@ -953,7 +953,7 @@ OptimizeGlobalAddressOfAllocation(GlobalVariable *GV, CallInst *CI,
GV->getContext(),
!isa<ConstantPointerNull>(SI->getValueOperand())),
InitBool, false, Align(1), SI->getOrdering(),
- SI->getSyncScopeID(), SI);
+ SI->getSyncScopeID(), SI->getIterator());
SI->eraseFromParent();
continue;
}
@@ -970,7 +970,8 @@ OptimizeGlobalAddressOfAllocation(GlobalVariable *GV, CallInst *CI,
// Replace the cmp X, 0 with a use of the bool value.
Value *LV = new LoadInst(InitBool->getValueType(), InitBool,
InitBool->getName() + ".val", false, Align(1),
- LI->getOrdering(), LI->getSyncScopeID(), LI);
+ LI->getOrdering(), LI->getSyncScopeID(),
+ LI->getIterator());
InitBoolUsed = true;
switch (ICI->getPredicate()) {
default: llvm_unreachable("Unknown ICmp Predicate!");
@@ -982,7 +983,7 @@ OptimizeGlobalAddressOfAllocation(GlobalVariable *GV, CallInst *CI,
break;
case ICmpInst::ICMP_ULE:
case ICmpInst::ICMP_EQ:
- LV = BinaryOperator::CreateNot(LV, "notinit", ICI);
+ LV = BinaryOperator::CreateNot(LV, "notinit", ICI->getIterator());
break;
case ICmpInst::ICMP_NE:
case ICmpInst::ICMP_UGT:
@@ -1260,9 +1261,10 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
if (LoadInst *LI = dyn_cast<LoadInst>(StoredVal)) {
assert(LI->getOperand(0) == GV && "Not a copy!");
// Insert a new load, to preserve the saved value.
- StoreVal = new LoadInst(NewGV->getValueType(), NewGV,
- LI->getName() + ".b", false, Align(1),
- LI->getOrdering(), LI->getSyncScopeID(), LI);
+ StoreVal =
+ new LoadInst(NewGV->getValueType(), NewGV, LI->getName() + ".b",
+ false, Align(1), LI->getOrdering(),
+ LI->getSyncScopeID(), LI->getIterator());
} else {
assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
"This is not a form that we understand!");
@@ -1272,19 +1274,19 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
}
StoreInst *NSI =
new StoreInst(StoreVal, NewGV, false, Align(1), SI->getOrdering(),
- SI->getSyncScopeID(), SI);
+ SI->getSyncScopeID(), SI->getIterator());
NSI->setDebugLoc(SI->getDebugLoc());
} else {
// Change the load into a load of bool then a select.
LoadInst *LI = cast<LoadInst>(UI);
- LoadInst *NLI = new LoadInst(NewGV->getValueType(), NewGV,
- LI->getName() + ".b", false, Align(1),
- LI->getOrdering(), LI->getSyncScopeID(), LI);
+ LoadInst *NLI = new LoadInst(
+ NewGV->getValueType(), NewGV, LI->getName() + ".b", false, Align(1),
+ LI->getOrdering(), LI->getSyncScopeID(), LI->getIterator());
Instruction *NSI;
if (IsOneZero)
- NSI = new ZExtInst(NLI, LI->getType(), "", LI);
+ NSI = new ZExtInst(NLI, LI->getType(), "", LI->getIterator());
else
- NSI = SelectInst::Create(NLI, OtherVal, InitVal, "", LI);
+ NSI = SelectInst::Create(NLI, OtherVal, InitVal, "", LI->getIterator());
NSI->takeName(LI);
// Since LI is split into two instructions, NLI and NSI both inherit the
// same DebugLoc
@@ -1462,14 +1464,14 @@ processInternalGlobal(GlobalVariable *GV, const GlobalStatus &GS,
const DataLayout &DL = GV->getParent()->getDataLayout();
LLVM_DEBUG(dbgs() << "LOCALIZING GLOBAL: " << *GV << "\n");
- Instruction &FirstI = const_cast<Instruction&>(*GS.AccessingFunction
- ->getEntryBlock().begin());
+ BasicBlock::iterator FirstI =
+ GS.AccessingFunction->getEntryBlock().begin().getNonConst();
Type *ElemTy = GV->getValueType();
// FIXME: Pass Global's alignment when globals have alignment
- AllocaInst *Alloca = new AllocaInst(ElemTy, DL.getAllocaAddrSpace(), nullptr,
- GV->getName(), &FirstI);
+ AllocaInst *Alloca = new AllocaInst(ElemTy, DL.getAllocaAddrSpace(),
+ nullptr, GV->getName(), FirstI);
if (!isa<UndefValue>(GV->getInitializer()))
- new StoreInst(GV->getInitializer(), Alloca, &FirstI);
+ new StoreInst(GV->getInitializer(), Alloca, FirstI);
GV->replaceAllUsesWith(Alloca);
GV->eraseFromParent();
@@ -1859,7 +1861,7 @@ static void RemovePreallocated(Function *F) {
assert((isa<CallInst>(CB) || isa<InvokeInst>(CB)) &&
"Unknown indirect call type");
- CallBase *NewCB = CallBase::Create(CB, OpBundles, CB);
+ CallBase *NewCB = CallBase::Create(CB, OpBundles, CB->getIterator());
CB->replaceAllUsesWith(NewCB);
NewCB->takeName(CB);
CB->eraseFromParent();
diff --git a/llvm/lib/Transforms/IPO/IROutliner.cpp b/llvm/lib/Transforms/IPO/IROutliner.cpp
index 48470bc71ff38a..03d4d503b80a7f 100644
--- a/llvm/lib/Transforms/IPO/IROutliner.cpp
+++ b/llvm/lib/Transforms/IPO/IROutliner.cpp
@@ -1501,7 +1501,7 @@ CallInst *replaceCalledFunction(Module &M, OutlinableRegion &Region) {
<< *AggFunc << " with new set of arguments\n");
// Create the new call instruction and erase the old one.
Call = CallInst::Create(AggFunc->getFunctionType(), AggFunc, NewCallArgs, "",
- Call);
+ Call->getIterator());
// It is possible that the call to the outlined function is either the first
// instruction is in the new block, the last instruction, or both. If either
diff --git a/llvm/lib/Transforms/IPO/OpenMPOpt.cpp b/llvm/lib/Transforms/IPO/OpenMPOpt.cpp
index 77ca36d64029f0..eea9399127e8ee 100644
--- a/llvm/lib/Transforms/IPO/OpenMPOpt.cpp
+++ b/llvm/lib/Transforms/IPO/OpenMPOpt.cpp
@@ -1146,17 +1146,18 @@ struct OpenMPOpt {
const DataLayout &DL = M.getDataLayout();
AllocaInst *AllocaI = new AllocaInst(
I.getType(), DL.getAllocaAddrSpace(), nullptr,
- I.getName() + ".seq.output.alloc", &OuterFn->front().front());
+ I.getName() + ".seq.output.alloc", OuterFn->front().begin());
// Emit a store instruction in the sequential BB to update the
// value.
- new StoreInst(&I, AllocaI, SeqStartBB->getTerminator());
+ new StoreInst(&I, AllocaI, SeqStartBB->getTerminator()->getIterator());
// Emit a load instruction and replace the use of the output value
// with it.
for (Instruction *UsrI : OutsideUsers) {
- LoadInst *LoadI = new LoadInst(
- I.getType(), AllocaI, I.getName() + ".seq.output.load", UsrI);
+ LoadInst *LoadI = new LoadInst(I.getType(), AllocaI,
+ I.getName() + ".seq.output.load",
+ UsrI->getIterator());
UsrI->replaceUsesOfWith(&I, LoadI);
}
}
@@ -1261,7 +1262,8 @@ struct OpenMPOpt {
++U)
Args.push_back(CI->getArgOperand(U));
- CallInst *NewCI = CallInst::Create(FT, Callee, Args, "", CI);
+ CallInst *NewCI =
+ CallInst::Create(FT, Callee, Args, "", CI->getIterator());
if (CI->getDebugLoc())
NewCI->setDebugLoc(CI->getDebugLoc());
@@ -1739,8 +1741,8 @@ struct OpenMPOpt {
Args.push_back(Arg.get());
Args.push_back(Handle);
- CallInst *IssueCallsite =
- CallInst::Create(IssueDecl, Args, /*NameStr=*/"", &RuntimeCall);
+ CallInst *IssueCallsite = CallInst::Create(IssueDecl, Args, /*NameStr=*/"",
+ RuntimeCall.getIterator());
OMPInfoCache.setCallingConvention(IssueDecl, IssueCallsite);
RuntimeCall.eraseFromParent();
@@ -1755,7 +1757,7 @@ struct OpenMPOpt {
Handle // handle to wait on.
};
CallInst *WaitCallsite = CallInst::Create(
- WaitDecl, WaitParams, /*NameStr=*/"", &WaitMovementPoint);
+ WaitDecl, WaitParams, /*NameStr=*/"", WaitMovementPoint.getIterator());
OMPInfoCache.setCallingConvention(WaitDecl, WaitCallsite);
return true;
@@ -4025,11 +4027,12 @@ struct AAKernelInfoFunction : AAKernelInfo {
static_cast<unsigned>(AddressSpace::Shared));
// Emit a store instruction to update the value.
- new StoreInst(&I, SharedMem, RegionEndBB->getTerminator());
+ new StoreInst(&I, SharedMem,
+ RegionEndBB->getTerminator()->getIterator());
- LoadInst *LoadI = new LoadInst(I.getType(), SharedMem,
- I.getName() + ".guarded.output.load",
- RegionBarrierBB->getTerminator());
+ LoadInst *LoadI = new LoadInst(
+ I.getType(), SharedMem, I.getName() + ".guarded.output.load",
+ RegionBarrierBB->getTerminator()->getIterator());
// Emit a load instruction and replace uses of the output value.
for (Use *U : OutsideUses)
@@ -4082,8 +4085,9 @@ struct AAKernelInfoFunction : AAKernelInfo {
// Second barrier ensures workers have read broadcast values.
if (HasBroadcastValues) {
- CallInst *Barrier = CallInst::Create(BarrierFn, {Ident, Tid}, "",
- RegionBarrierBB->getTerminator());
+ CallInst *Barrier =
+ CallInst::Create(BarrierFn, {Ident, Tid}, "",
+ RegionBarrierBB->getTerminator()->getIterator());
Barrier->setDebugLoc(DL);
OMPInfoCache.setCallingConvention(BarrierFn, Barrier);
}
@@ -4488,7 +4492,7 @@ struct AAKernelInfoFunction : AAKernelInfo {
Type *VoidPtrTy = PointerType::getUnqual(Ctx);
Instruction *WorkFnAI =
new AllocaInst(VoidPtrTy, DL.getAllocaAddrSpace(), nullptr,
- "worker.work_fn.addr", &Kernel->getEntryBlock().front());
+ "worker.work_fn.addr", Kernel->getEntryBlock().begin());
WorkFnAI->setDebugLoc(DLoc);
OMPInfoCache.OMPBuilder.updateToLocation(
diff --git a/llvm/lib/Transforms/IPO/WholeProgramDevirt.cpp b/llvm/lib/Transforms/IPO/WholeProgramDevirt.cpp
index 75f7de4290a742..cf13c91a867704 100644
--- a/llvm/lib/Transforms/IPO/WholeProgramDevirt.cpp
+++ b/llvm/lib/Transforms/IPO/WholeProgramDevirt.cpp
@@ -434,7 +434,7 @@ struct VirtualCallSite {
emitRemark(OptName, TargetName, OREGetter);
CB.replaceAllUsesWith(New);
if (auto *II = dyn_cast<InvokeInst>(&CB)) {
- BranchInst::Create(II->getNormalDest(), &CB);
+ BranchInst::Create(II->getNormalDest(), CB.getIterator());
II->getUnwindDest()->removePredecessor(II->getParent());
}
CB.eraseFromParent();
@@ -861,7 +861,7 @@ void llvm::updatePublicTypeTestCalls(Module &M,
auto *CI = cast<CallInst>(U.getUser());
auto *NewCI = CallInst::Create(
TypeTestFunc, {CI->getArgOperand(0), CI->getArgOperand(1)},
- std::nullopt, "", CI);
+ std::nullopt, "", CI->getIterator());
CI->replaceAllUsesWith(NewCI);
CI->eraseFromParent();
}
@@ -1225,8 +1225,8 @@ void DevirtModule::applySingleImplDevirt(VTableSlotInfo &SlotInfo,
CB.setMetadata(LLVMContext::MD_callees, nullptr);
if (CB.getCalledOperand() &&
CB.getOperandBundle(LLVMContext::OB_ptrauth)) {
- auto *NewCS =
- CallBase::removeOperandBundle(&CB, LLVMContext::OB_ptrauth, &CB);
+ auto *NewCS = CallBase::removeOperandBundle(
+ &CB, LLVMContext::OB_ptrauth, CB.getIterator());
CB.replaceAllUsesWith(NewCS);
// Schedule for deletion at the end of pass run.
CallsWithPtrAuthBundleRemoved.push_back(&CB);
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
index 89ca40f1c20d53..d2756b0d4d54fb 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
@@ -3140,28 +3140,30 @@ Instruction *InstCombinerImpl::visitCallInst(CallInst &CI) {
if (match(BO0, m_VecReverse(m_Value(X)))) {
// rev(binop rev(X), rev(Y)) --> binop X, Y
if (match(BO1, m_VecReverse(m_Value(Y))))
- return replaceInstUsesWith(CI,
- BinaryOperator::CreateWithCopiedFlags(
- OldBinOp->getOpcode(), X, Y, OldBinOp,
- OldBinOp->getName(), II));
+ return replaceInstUsesWith(CI, BinaryOperator::CreateWithCopiedFlags(
+ OldBinOp->getOpcode(), X, Y,
+ OldBinOp, OldBinOp->getName(),
+ II->getIterator()));
// rev(binop rev(X), BO1Splat) --> binop X, BO1Splat
if (isSplatValue(BO1))
- return replaceInstUsesWith(CI,
- BinaryOperator::CreateWithCopiedFlags(
- OldBinOp->getOpcode(), X, BO1,
- OldBinOp, OldBinOp->getName(), II));
+ return replaceInstUsesWith(CI, BinaryOperator::CreateWithCopiedFlags(
+ OldBinOp->getOpcode(), X, BO1,
+ OldBinOp, OldBinOp->getName(),
+ II->getIterator()));
}
// rev(binop BO0Splat, rev(Y)) --> binop BO0Splat, Y
if (match(BO1, m_VecReverse(m_Value(Y))) && isSplatValue(BO0))
- return replaceInstUsesWith(CI, BinaryOperator::CreateWithCopiedFlags(
- OldBinOp->getOpcode(), BO0, Y,
- OldBinOp, OldBinOp->getName(), II));
+ return replaceInstUsesWith(CI,
+ BinaryOperator::CreateWithCopiedFlags(
+ OldBinOp->getOpcode(), BO0, Y, OldBinOp,
+ OldBinOp->getName(), II->getIterator()));
}
// rev(unop rev(X)) --> unop X
if (match(Vec, m_OneUse(m_UnOp(m_VecReverse(m_Value(X)))))) {
auto *OldUnOp = cast<UnaryOperator>(Vec);
auto *NewUnOp = UnaryOperator::CreateWithCopiedFlags(
- OldUnOp->getOpcode(), X, OldUnOp, OldUnOp->getName(), II);
+ OldUnOp->getOpcode(), X, OldUnOp, OldUnOp->getName(),
+ II->getIterator());
return replaceInstUsesWith(CI, NewUnOp);
}
break;
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index 1cebab8203ea54..fc2688f425bb8f 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -556,8 +556,9 @@ static Value *rewriteGEPAsOffset(Value *Start, Value *Base,
// Create empty phi nodes. This avoids cyclic dependencies when creating
// the remaining instructions.
if (auto *PHI = dyn_cast<PHINode>(Val))
- NewInsts[PHI] = PHINode::Create(IndexType, PHI->getNumIncomingValues(),
- PHI->getName() + ".idx", PHI);
+ NewInsts[PHI] =
+ PHINode::Create(IndexType, PHI->getNumIncomingValues(),
+ PHI->getName() + ".idx", PHI->getIterator());
}
IRBuilder<> Builder(Base->getContext());
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp b/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
index a222889842f54f..c70872c1291738 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
@@ -374,7 +374,7 @@ void PointerReplacer::replace(Instruction *I) {
} else if (auto *PHI = dyn_cast<PHINode>(I)) {
Type *NewTy = getReplacement(PHI->getIncomingValue(0))->getType();
auto *NewPHI = PHINode::Create(NewTy, PHI->getNumIncomingValues(),
- PHI->getName(), PHI);
+ PHI->getName(), PHI->getIterator());
for (unsigned int I = 0; I < PHI->getNumIncomingValues(); ++I)
NewPHI->addIncoming(getReplacement(PHI->getIncomingValue(I)),
PHI->getIncomingBlock(I));
diff --git a/llvm/lib/Transforms/InstCombine/InstCombinePHI.cpp b/llvm/lib/Transforms/InstCombine/InstCombinePHI.cpp
index 192ccbbcb7c7bc..46bca4b722a03a 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombinePHI.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombinePHI.cpp
@@ -1205,7 +1205,8 @@ Instruction *InstCombinerImpl::SliceUpIllegalIntegerPHI(PHINode &FirstPhi) {
// Otherwise, Create the new PHI node for this user.
EltPHI = PHINode::Create(Ty, PN->getNumIncomingValues(),
- PN->getName()+".off"+Twine(Offset), PN);
+ PN->getName() + ".off" + Twine(Offset),
+ PN->getIterator());
assert(EltPHI->getType() != PN->getType() &&
"Truncate didn't shrink phi?");
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp b/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
index 18ab510aae7f21..3c4c0f35eb6d48 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
@@ -1263,7 +1263,8 @@ static Instruction *foldInsSequenceIntoSplat(InsertElementInst &InsElt) {
PoisonValue *PoisonVec = PoisonValue::get(VecTy);
Constant *Zero = ConstantInt::get(Int64Ty, 0);
if (!cast<ConstantInt>(FirstIE->getOperand(2))->isZero())
- FirstIE = InsertElementInst::Create(PoisonVec, SplatVal, Zero, "", &InsElt);
+ FirstIE = InsertElementInst::Create(PoisonVec, SplatVal, Zero, "",
+ InsElt.getIterator());
// Splat from element 0, but replace absent elements with poison in the mask.
SmallVector<int, 16> Mask(NumElements, 0);
diff --git a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
index 5d5c4ea57ed56c..f22f53b8cd8fc6 100644
--- a/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp
@@ -1879,7 +1879,7 @@ void ModuleAddressSanitizer::poisonOneInitializer(Function &GlobalInit,
// Add calls to unpoison all globals before each return instruction.
for (auto &BB : GlobalInit)
if (ReturnInst *RI = dyn_cast<ReturnInst>(BB.getTerminator()))
- CallInst::Create(AsanUnpoisonGlobals, "", RI);
+ CallInst::Create(AsanUnpoisonGlobals, "", RI->getIterator());
}
void ModuleAddressSanitizer::createInitializerPoisonCalls(
diff --git a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
index 2ba127bba6f682..9a2ec7618c1a49 100644
--- a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
@@ -516,10 +516,12 @@ class DataFlowSanitizer {
const MemoryMapParams *MapParams;
Value *getShadowOffset(Value *Addr, IRBuilder<> &IRB);
- Value *getShadowAddress(Value *Addr, Instruction *Pos);
- Value *getShadowAddress(Value *Addr, Instruction *Pos, Value *ShadowOffset);
- std::pair<Value *, Value *>
- getShadowOriginAddress(Value *Addr, Align InstAlignment, Instruction *Pos);
+ Value *getShadowAddress(Value *Addr, BasicBlock::iterator Pos);
+ Value *getShadowAddress(Value *Addr, BasicBlock::iterator Pos,
+ Value *ShadowOffset);
+ std::pair<Value *, Value *> getShadowOriginAddress(Value *Addr,
+ Align InstAlignment,
+ BasicBlock::iterator Pos);
bool isInstrumented(const Function *F);
bool isInstrumented(const GlobalAlias *GA);
bool isForceZeroLabels(const Function *F);
@@ -536,7 +538,7 @@ class DataFlowSanitizer {
/// Advances \p OriginAddr to point to the next 32-bit origin and then loads
/// from it. Returns the origin's loaded value.
- Value *loadNextOrigin(Instruction *Pos, Align OriginAlign,
+ Value *loadNextOrigin(BasicBlock::iterator Pos, Align OriginAlign,
Value **OriginAddr);
/// Returns whether the given load byte size is amenable to inlined
@@ -647,18 +649,18 @@ struct DFSanFunction {
/// When Zero is nullptr, it uses ZeroPrimitiveShadow. Otherwise it can be
/// zeros with other bitwidths.
Value *combineOrigins(const std::vector<Value *> &Shadows,
- const std::vector<Value *> &Origins, Instruction *Pos,
- ConstantInt *Zero = nullptr);
+ const std::vector<Value *> &Origins,
+ BasicBlock::iterator Pos, ConstantInt *Zero = nullptr);
Value *getShadow(Value *V);
void setShadow(Instruction *I, Value *Shadow);
/// Generates IR to compute the union of the two given shadows, inserting it
/// before Pos. The combined value is with primitive type.
- Value *combineShadows(Value *V1, Value *V2, Instruction *Pos);
+ Value *combineShadows(Value *V1, Value *V2, BasicBlock::iterator Pos);
/// Combines the shadow values of V1 and V2, then converts the combined value
/// with primitive type into a shadow value with the original type T.
Value *combineShadowsThenConvert(Type *T, Value *V1, Value *V2,
- Instruction *Pos);
+ BasicBlock::iterator Pos);
Value *combineOperandShadows(Instruction *Inst);
/// Generates IR to load shadow and origin corresponding to bytes [\p
@@ -670,11 +672,11 @@ struct DFSanFunction {
/// current stack if the returned shadow is tainted.
std::pair<Value *, Value *> loadShadowOrigin(Value *Addr, uint64_t Size,
Align InstAlignment,
- Instruction *Pos);
+ BasicBlock::iterator Pos);
void storePrimitiveShadowOrigin(Value *Addr, uint64_t Size,
Align InstAlignment, Value *PrimitiveShadow,
- Value *Origin, Instruction *Pos);
+ Value *Origin, BasicBlock::iterator Pos);
/// Applies PrimitiveShadow to all primitive subtypes of T, returning
/// the expanded shadow value.
///
@@ -682,7 +684,7 @@ struct DFSanFunction {
/// EFP([n x T], PS) = [n x EFP(T,PS)]
/// EFP(other types, PS) = PS
Value *expandFromPrimitiveShadow(Type *T, Value *PrimitiveShadow,
- Instruction *Pos);
+ BasicBlock::iterator Pos);
/// Collapses Shadow into a single primitive shadow value, unioning all
/// primitive shadow values in the process. Returns the final primitive
/// shadow value.
@@ -690,10 +692,10 @@ struct DFSanFunction {
/// CTP({V1,V2, ...}) = UNION(CFP(V1,PS),CFP(V2,PS),...)
/// CTP([V1,V2,...]) = UNION(CFP(V1,PS),CFP(V2,PS),...)
/// CTP(other types, PS) = PS
- Value *collapseToPrimitiveShadow(Value *Shadow, Instruction *Pos);
+ Value *collapseToPrimitiveShadow(Value *Shadow, BasicBlock::iterator Pos);
void storeZeroPrimitiveShadow(Value *Addr, uint64_t Size, Align ShadowAlign,
- Instruction *Pos);
+ BasicBlock::iterator Pos);
Align getShadowAlign(Align InstAlignment);
@@ -724,7 +726,7 @@ struct DFSanFunction {
std::pair<Value *, Value *>
loadShadowFast(Value *ShadowAddr, Value *OriginAddr, uint64_t Size,
Align ShadowAlign, Align OriginAlign, Value *FirstOrigin,
- Instruction *Pos);
+ BasicBlock::iterator Pos);
Align getOriginAlign(Align InstAlignment);
@@ -760,8 +762,9 @@ struct DFSanFunction {
/// for untainted sinks.
/// * Use __dfsan_maybe_store_origin if there are too many origin store
/// instrumentations.
- void storeOrigin(Instruction *Pos, Value *Addr, uint64_t Size, Value *Shadow,
- Value *Origin, Value *StoreOriginAddr, Align InstAlignment);
+ void storeOrigin(BasicBlock::iterator Pos, Value *Addr, uint64_t Size,
+ Value *Shadow, Value *Origin, Value *StoreOriginAddr,
+ Align InstAlignment);
/// Convert a scalar value to an i1 by comparing with 0.
Value *convertToBool(Value *V, IRBuilder<> &IRB, const Twine &Name = "");
@@ -774,7 +777,8 @@ struct DFSanFunction {
/// shadow always has primitive type.
std::pair<Value *, Value *>
loadShadowOriginSansLoadTracking(Value *Addr, uint64_t Size,
- Align InstAlignment, Instruction *Pos);
+ Align InstAlignment,
+ BasicBlock::iterator Pos);
int NumOriginStores = 0;
};
@@ -975,7 +979,7 @@ bool DFSanFunction::shouldInstrumentWithCall() {
}
Value *DFSanFunction::expandFromPrimitiveShadow(Type *T, Value *PrimitiveShadow,
- Instruction *Pos) {
+ BasicBlock::iterator Pos) {
Type *ShadowTy = DFS.getShadowTy(T);
if (!isa<ArrayType>(ShadowTy) && !isa<StructType>(ShadowTy))
@@ -984,7 +988,7 @@ Value *DFSanFunction::expandFromPrimitiveShadow(Type *T, Value *PrimitiveShadow,
if (DFS.isZeroShadow(PrimitiveShadow))
return DFS.getZeroShadow(ShadowTy);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
SmallVector<unsigned, 4> Indices;
Value *Shadow = UndefValue::get(ShadowTy);
Shadow = expandFromPrimitiveShadowRecursive(Shadow, Indices, ShadowTy,
@@ -1025,7 +1029,7 @@ Value *DFSanFunction::collapseToPrimitiveShadow(Value *Shadow,
}
Value *DFSanFunction::collapseToPrimitiveShadow(Value *Shadow,
- Instruction *Pos) {
+ BasicBlock::iterator Pos) {
Type *ShadowTy = Shadow->getType();
if (!isa<ArrayType>(ShadowTy) && !isa<StructType>(ShadowTy))
return Shadow;
@@ -1035,7 +1039,7 @@ Value *DFSanFunction::collapseToPrimitiveShadow(Value *Shadow,
if (CS && DT.dominates(CS, Pos))
return CS;
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *PrimitiveShadow = collapseToPrimitiveShadow(Shadow, IRB);
// Caches the converted primitive shadow value.
CS = PrimitiveShadow;
@@ -1760,14 +1764,14 @@ bool DataFlowSanitizer::runImpl(
// instrumentation.
if (ClDebugNonzeroLabels) {
for (Value *V : DFSF.NonZeroChecks) {
- Instruction *Pos;
+ BasicBlock::iterator Pos;
if (Instruction *I = dyn_cast<Instruction>(V))
- Pos = I->getNextNode();
+ Pos = std::next(I->getIterator());
else
- Pos = &DFSF.F->getEntryBlock().front();
+ Pos = DFSF.F->getEntryBlock().begin();
while (isa<PHINode>(Pos) || isa<AllocaInst>(Pos))
- Pos = Pos->getNextNode();
- IRBuilder<> IRB(Pos);
+ Pos = std::next(Pos->getIterator());
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *PrimitiveShadow = DFSF.collapseToPrimitiveShadow(V, Pos);
Value *Ne =
IRB.CreateICmpNE(PrimitiveShadow, DFSF.DFS.ZeroPrimitiveShadow);
@@ -1912,9 +1916,9 @@ Value *DataFlowSanitizer::getShadowOffset(Value *Addr, IRBuilder<> &IRB) {
std::pair<Value *, Value *>
DataFlowSanitizer::getShadowOriginAddress(Value *Addr, Align InstAlignment,
- Instruction *Pos) {
+ BasicBlock::iterator Pos) {
// Returns ((Addr & shadow_mask) + origin_base - shadow_base) & ~4UL
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *ShadowOffset = getShadowOffset(Addr, IRB);
Value *ShadowLong = ShadowOffset;
uint64_t ShadowBase = MapParams->ShadowBase;
@@ -1944,27 +1948,30 @@ DataFlowSanitizer::getShadowOriginAddress(Value *Addr, Align InstAlignment,
return std::make_pair(ShadowPtr, OriginPtr);
}
-Value *DataFlowSanitizer::getShadowAddress(Value *Addr, Instruction *Pos,
+Value *DataFlowSanitizer::getShadowAddress(Value *Addr,
+ BasicBlock::iterator Pos,
Value *ShadowOffset) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
return IRB.CreateIntToPtr(ShadowOffset, PrimitiveShadowPtrTy);
}
-Value *DataFlowSanitizer::getShadowAddress(Value *Addr, Instruction *Pos) {
- IRBuilder<> IRB(Pos);
+Value *DataFlowSanitizer::getShadowAddress(Value *Addr,
+ BasicBlock::iterator Pos) {
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *ShadowOffset = getShadowOffset(Addr, IRB);
return getShadowAddress(Addr, Pos, ShadowOffset);
}
Value *DFSanFunction::combineShadowsThenConvert(Type *T, Value *V1, Value *V2,
- Instruction *Pos) {
+ BasicBlock::iterator Pos) {
Value *PrimitiveValue = combineShadows(V1, V2, Pos);
return expandFromPrimitiveShadow(T, PrimitiveValue, Pos);
}
// Generates IR to compute the union of the two given shadows, inserting it
// before Pos. The combined value is with primitive type.
-Value *DFSanFunction::combineShadows(Value *V1, Value *V2, Instruction *Pos) {
+Value *DFSanFunction::combineShadows(Value *V1, Value *V2,
+ BasicBlock::iterator Pos) {
if (DFS.isZeroShadow(V1))
return collapseToPrimitiveShadow(V2, Pos);
if (DFS.isZeroShadow(V2))
@@ -2002,7 +2009,7 @@ Value *DFSanFunction::combineShadows(Value *V1, Value *V2, Instruction *Pos) {
Value *PV1 = collapseToPrimitiveShadow(V1, Pos);
Value *PV2 = collapseToPrimitiveShadow(V2, Pos);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
CCS.Block = Pos->getParent();
CCS.Shadow = IRB.CreateOr(PV1, PV2);
@@ -2031,9 +2038,11 @@ Value *DFSanFunction::combineOperandShadows(Instruction *Inst) {
Value *Shadow = getShadow(Inst->getOperand(0));
for (unsigned I = 1, N = Inst->getNumOperands(); I < N; ++I)
- Shadow = combineShadows(Shadow, getShadow(Inst->getOperand(I)), Inst);
+ Shadow = combineShadows(Shadow, getShadow(Inst->getOperand(I)),
+ Inst->getIterator());
- return expandFromPrimitiveShadow(Inst->getType(), Shadow, Inst);
+ return expandFromPrimitiveShadow(Inst->getType(), Shadow,
+ Inst->getIterator());
}
void DFSanVisitor::visitInstOperands(Instruction &I) {
@@ -2044,7 +2053,8 @@ void DFSanVisitor::visitInstOperands(Instruction &I) {
Value *DFSanFunction::combineOrigins(const std::vector<Value *> &Shadows,
const std::vector<Value *> &Origins,
- Instruction *Pos, ConstantInt *Zero) {
+ BasicBlock::iterator Pos,
+ ConstantInt *Zero) {
assert(Shadows.size() == Origins.size());
size_t Size = Origins.size();
if (Size == 0)
@@ -2063,7 +2073,7 @@ Value *DFSanFunction::combineOrigins(const std::vector<Value *> &Shadows,
}
Value *OpShadow = Shadows[I];
Value *PrimitiveShadow = collapseToPrimitiveShadow(OpShadow, Pos);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *Cond = IRB.CreateICmpNE(PrimitiveShadow, Zero);
Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
}
@@ -2078,7 +2088,7 @@ Value *DFSanFunction::combineOperandOrigins(Instruction *Inst) {
Shadows[I] = getShadow(Inst->getOperand(I));
Origins[I] = getOrigin(Inst->getOperand(I));
}
- return combineOrigins(Shadows, Origins, Inst);
+ return combineOrigins(Shadows, Origins, Inst->getIterator());
}
void DFSanVisitor::visitInstOperandOrigins(Instruction &I) {
@@ -2129,9 +2139,10 @@ bool DFSanFunction::useCallbackLoadLabelAndOrigin(uint64_t Size,
return Alignment < MinOriginAlignment || !DFS.hasLoadSizeForFastPath(Size);
}
-Value *DataFlowSanitizer::loadNextOrigin(Instruction *Pos, Align OriginAlign,
+Value *DataFlowSanitizer::loadNextOrigin(BasicBlock::iterator Pos,
+ Align OriginAlign,
Value **OriginAddr) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
*OriginAddr =
IRB.CreateGEP(OriginTy, *OriginAddr, ConstantInt::get(IntptrTy, 1));
return IRB.CreateAlignedLoad(OriginTy, *OriginAddr, OriginAlign);
@@ -2139,7 +2150,7 @@ Value *DataFlowSanitizer::loadNextOrigin(Instruction *Pos, Align OriginAlign,
std::pair<Value *, Value *> DFSanFunction::loadShadowFast(
Value *ShadowAddr, Value *OriginAddr, uint64_t Size, Align ShadowAlign,
- Align OriginAlign, Value *FirstOrigin, Instruction *Pos) {
+ Align OriginAlign, Value *FirstOrigin, BasicBlock::iterator Pos) {
const bool ShouldTrackOrigins = DFS.shouldTrackOrigins();
const uint64_t ShadowSize = Size * DFS.ShadowWidthBytes;
@@ -2163,7 +2174,7 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowFast(
Type *WideShadowTy =
ShadowSize == 4 ? Type::getInt32Ty(*DFS.Ctx) : Type::getInt64Ty(*DFS.Ctx);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *CombinedWideShadow =
IRB.CreateAlignedLoad(WideShadowTy, ShadowAddr, ShadowAlign);
@@ -2225,14 +2236,14 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowFast(
}
std::pair<Value *, Value *> DFSanFunction::loadShadowOriginSansLoadTracking(
- Value *Addr, uint64_t Size, Align InstAlignment, Instruction *Pos) {
+ Value *Addr, uint64_t Size, Align InstAlignment, BasicBlock::iterator Pos) {
const bool ShouldTrackOrigins = DFS.shouldTrackOrigins();
// Non-escaped loads.
if (AllocaInst *AI = dyn_cast<AllocaInst>(Addr)) {
const auto SI = AllocaShadowMap.find(AI);
if (SI != AllocaShadowMap.end()) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *ShadowLI = IRB.CreateLoad(DFS.PrimitiveShadowTy, SI->second);
const auto OI = AllocaOriginMap.find(AI);
assert(!ShouldTrackOrigins || OI != AllocaOriginMap.end());
@@ -2267,7 +2278,7 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowOriginSansLoadTracking(
// tracking.
if (ShouldTrackOrigins &&
useCallbackLoadLabelAndOrigin(Size, InstAlignment)) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
CallInst *Call =
IRB.CreateCall(DFS.DFSanLoadLabelAndOriginFn,
{Addr, ConstantInt::get(DFS.IntptrTy, Size)});
@@ -2286,7 +2297,7 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowOriginSansLoadTracking(
const Align OriginAlign = getOriginAlign(InstAlignment);
Value *Origin = nullptr;
if (ShouldTrackOrigins) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Origin = IRB.CreateAlignedLoad(DFS.OriginTy, OriginAddr, OriginAlign);
}
@@ -2299,7 +2310,7 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowOriginSansLoadTracking(
return {LI, Origin};
}
case 2: {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *ShadowAddr1 = IRB.CreateGEP(DFS.PrimitiveShadowTy, ShadowAddr,
ConstantInt::get(DFS.IntptrTy, 1));
Value *Load =
@@ -2315,23 +2326,22 @@ std::pair<Value *, Value *> DFSanFunction::loadShadowOriginSansLoadTracking(
return loadShadowFast(ShadowAddr, OriginAddr, Size, ShadowAlign,
OriginAlign, Origin, Pos);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
CallInst *FallbackCall = IRB.CreateCall(
DFS.DFSanUnionLoadFn, {ShadowAddr, ConstantInt::get(DFS.IntptrTy, Size)});
FallbackCall->addRetAttr(Attribute::ZExt);
return {FallbackCall, Origin};
}
-std::pair<Value *, Value *> DFSanFunction::loadShadowOrigin(Value *Addr,
- uint64_t Size,
- Align InstAlignment,
- Instruction *Pos) {
+std::pair<Value *, Value *>
+DFSanFunction::loadShadowOrigin(Value *Addr, uint64_t Size, Align InstAlignment,
+ BasicBlock::iterator Pos) {
Value *PrimitiveShadow, *Origin;
std::tie(PrimitiveShadow, Origin) =
loadShadowOriginSansLoadTracking(Addr, Size, InstAlignment, Pos);
if (DFS.shouldTrackOrigins()) {
if (ClTrackOrigins == 2) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
auto *ConstantShadow = dyn_cast<Constant>(PrimitiveShadow);
if (!ConstantShadow || !ConstantShadow->isZeroValue())
Origin = updateOriginIfTainted(PrimitiveShadow, Origin, IRB);
@@ -2397,8 +2407,11 @@ void DFSanVisitor::visitLoadInst(LoadInst &LI) {
if (LI.isAtomic())
LI.setOrdering(addAcquireOrdering(LI.getOrdering()));
- Instruction *AfterLi = LI.getNextNode();
- Instruction *Pos = LI.isAtomic() ? LI.getNextNode() : &LI;
+ BasicBlock::iterator AfterLi = std::next(LI.getIterator());
+ BasicBlock::iterator Pos = LI.getIterator();
+ if (LI.isAtomic())
+ Pos = std::next(Pos);
+
std::vector<Value *> Shadows;
std::vector<Value *> Origins;
Value *PrimitiveShadow, *Origin;
@@ -2431,14 +2444,14 @@ void DFSanVisitor::visitLoadInst(LoadInst &LI) {
}
if (ClEventCallbacks) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *Addr = LI.getPointerOperand();
CallInst *CI =
IRB.CreateCall(DFSF.DFS.DFSanLoadCallbackFn, {PrimitiveShadow, Addr});
CI->addParamAttr(0, Attribute::ZExt);
}
- IRBuilder<> IRB(AfterLi);
+ IRBuilder<> IRB(AfterLi->getParent(), AfterLi);
DFSF.addReachesFunctionCallbacksIfEnabled(IRB, LI, &LI);
}
@@ -2510,14 +2523,14 @@ Value *DFSanFunction::convertToBool(Value *V, IRBuilder<> &IRB,
return IRB.CreateICmpNE(V, ConstantInt::get(VTy, 0), Name);
}
-void DFSanFunction::storeOrigin(Instruction *Pos, Value *Addr, uint64_t Size,
- Value *Shadow, Value *Origin,
+void DFSanFunction::storeOrigin(BasicBlock::iterator Pos, Value *Addr,
+ uint64_t Size, Value *Shadow, Value *Origin,
Value *StoreOriginAddr, Align InstAlignment) {
// Do not write origins for zero shadows because we do not trace origins for
// untainted sinks.
const Align OriginAlignment = getOriginAlign(InstAlignment);
Value *CollapsedShadow = collapseToPrimitiveShadow(Shadow, Pos);
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
if (auto *ConstantShadow = dyn_cast<Constant>(CollapsedShadow)) {
if (!ConstantShadow->isZeroValue())
paintOrigin(IRB, updateOrigin(Origin, IRB), StoreOriginAddr, Size,
@@ -2543,8 +2556,8 @@ void DFSanFunction::storeOrigin(Instruction *Pos, Value *Addr, uint64_t Size,
void DFSanFunction::storeZeroPrimitiveShadow(Value *Addr, uint64_t Size,
Align ShadowAlign,
- Instruction *Pos) {
- IRBuilder<> IRB(Pos);
+ BasicBlock::iterator Pos) {
+ IRBuilder<> IRB(Pos->getParent(), Pos);
IntegerType *ShadowTy =
IntegerType::get(*DFS.Ctx, Size * DFS.ShadowWidthBits);
Value *ExtZeroShadow = ConstantInt::get(ShadowTy, 0);
@@ -2558,13 +2571,13 @@ void DFSanFunction::storePrimitiveShadowOrigin(Value *Addr, uint64_t Size,
Align InstAlignment,
Value *PrimitiveShadow,
Value *Origin,
- Instruction *Pos) {
+ BasicBlock::iterator Pos) {
const bool ShouldTrackOrigins = DFS.shouldTrackOrigins() && Origin;
if (AllocaInst *AI = dyn_cast<AllocaInst>(Addr)) {
const auto SI = AllocaShadowMap.find(AI);
if (SI != AllocaShadowMap.end()) {
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
IRB.CreateStore(PrimitiveShadow, SI->second);
// Do not write origins for 0 shadows because we do not trace origins for
@@ -2584,7 +2597,7 @@ void DFSanFunction::storePrimitiveShadowOrigin(Value *Addr, uint64_t Size,
return;
}
- IRBuilder<> IRB(Pos);
+ IRBuilder<> IRB(Pos->getParent(), Pos);
Value *ShadowAddr, *OriginAddr;
std::tie(ShadowAddr, OriginAddr) =
DFS.getShadowOriginAddress(Addr, InstAlignment, Pos);
@@ -2679,15 +2692,15 @@ void DFSanVisitor::visitStoreInst(StoreInst &SI) {
Shadows.push_back(PtrShadow);
Origins.push_back(DFSF.getOrigin(SI.getPointerOperand()));
}
- PrimitiveShadow = DFSF.combineShadows(Shadow, PtrShadow, &SI);
+ PrimitiveShadow = DFSF.combineShadows(Shadow, PtrShadow, SI.getIterator());
} else {
- PrimitiveShadow = DFSF.collapseToPrimitiveShadow(Shadow, &SI);
+ PrimitiveShadow = DFSF.collapseToPrimitiveShadow(Shadow, SI.getIterator());
}
Value *Origin = nullptr;
if (ShouldTrackOrigins)
- Origin = DFSF.combineOrigins(Shadows, Origins, &SI);
+ Origin = DFSF.combineOrigins(Shadows, Origins, SI.getIterator());
DFSF.storePrimitiveShadowOrigin(SI.getPointerOperand(), Size, SI.getAlign(),
- PrimitiveShadow, Origin, &SI);
+ PrimitiveShadow, Origin, SI.getIterator());
if (ClEventCallbacks) {
IRBuilder<> IRB(&SI);
Value *Addr = SI.getPointerOperand();
@@ -2711,7 +2724,7 @@ void DFSanVisitor::visitCASOrRMW(Align InstAlignment, Instruction &I) {
IRBuilder<> IRB(&I);
Value *Addr = I.getOperand(0);
const Align ShadowAlign = DFSF.getShadowAlign(InstAlignment);
- DFSF.storeZeroPrimitiveShadow(Addr, Size, ShadowAlign, &I);
+ DFSF.storeZeroPrimitiveShadow(Addr, Size, ShadowAlign, I.getIterator());
DFSF.setShadow(&I, DFSF.DFS.getZeroShadow(&I));
DFSF.setOrigin(&I, DFSF.DFS.ZeroOrigin);
}
@@ -2866,7 +2879,7 @@ void DFSanVisitor::visitSelectInst(SelectInst &I) {
if (isa<VectorType>(I.getCondition()->getType())) {
ShadowSel = DFSF.combineShadowsThenConvert(I.getType(), TrueShadow,
- FalseShadow, &I);
+ FalseShadow, I.getIterator());
if (ShouldTrackOrigins) {
Shadows.push_back(TrueShadow);
Shadows.push_back(FalseShadow);
@@ -2881,25 +2894,25 @@ void DFSanVisitor::visitSelectInst(SelectInst &I) {
Origins.push_back(TrueOrigin);
}
} else {
- ShadowSel =
- SelectInst::Create(I.getCondition(), TrueShadow, FalseShadow, "", &I);
+ ShadowSel = SelectInst::Create(I.getCondition(), TrueShadow, FalseShadow,
+ "", I.getIterator());
if (ShouldTrackOrigins) {
Shadows.push_back(ShadowSel);
Origins.push_back(SelectInst::Create(I.getCondition(), TrueOrigin,
- FalseOrigin, "", &I));
+ FalseOrigin, "", I.getIterator()));
}
}
}
- DFSF.setShadow(&I, ClTrackSelectControlFlow
- ? DFSF.combineShadowsThenConvert(
- I.getType(), CondShadow, ShadowSel, &I)
- : ShadowSel);
+ DFSF.setShadow(&I, ClTrackSelectControlFlow ? DFSF.combineShadowsThenConvert(
+ I.getType(), CondShadow,
+ ShadowSel, I.getIterator())
+ : ShadowSel);
if (ShouldTrackOrigins) {
if (ClTrackSelectControlFlow) {
Shadows.push_back(CondShadow);
Origins.push_back(DFSF.getOrigin(I.getCondition()));
}
- DFSF.setOrigin(&I, DFSF.combineOrigins(Shadows, Origins, &I));
+ DFSF.setOrigin(&I, DFSF.combineOrigins(Shadows, Origins, I.getIterator()));
}
}
@@ -2926,8 +2939,8 @@ void DFSanVisitor::visitMemTransferInst(MemTransferInst &I) {
IRB.CreateIntCast(I.getArgOperand(2), DFSF.DFS.IntptrTy, false)});
}
- Value *DestShadow = DFSF.DFS.getShadowAddress(I.getDest(), &I);
- Value *SrcShadow = DFSF.DFS.getShadowAddress(I.getSource(), &I);
+ Value *DestShadow = DFSF.DFS.getShadowAddress(I.getDest(), I.getIterator());
+ Value *SrcShadow = DFSF.DFS.getShadowAddress(I.getSource(), I.getIterator());
Value *LenShadow =
IRB.CreateMul(I.getLength(), ConstantInt::get(I.getLength()->getType(),
DFSF.DFS.ShadowWidthBytes));
@@ -2996,7 +3009,8 @@ void DFSanVisitor::addShadowArguments(Function &F, CallBase &CB,
// Adds non-variable argument shadows.
for (unsigned N = FT->getNumParams(); N != 0; ++I, --N)
- Args.push_back(DFSF.collapseToPrimitiveShadow(DFSF.getShadow(*I), &CB));
+ Args.push_back(
+ DFSF.collapseToPrimitiveShadow(DFSF.getShadow(*I), CB.getIterator()));
// Adds variable argument shadows.
if (FT->isVarArg()) {
@@ -3004,12 +3018,13 @@ void DFSanVisitor::addShadowArguments(Function &F, CallBase &CB,
CB.arg_size() - FT->getNumParams());
auto *LabelVAAlloca =
new AllocaInst(LabelVATy, getDataLayout().getAllocaAddrSpace(),
- "labelva", &DFSF.F->getEntryBlock().front());
+ "labelva", DFSF.F->getEntryBlock().begin());
for (unsigned N = 0; I != CB.arg_end(); ++I, ++N) {
auto *LabelVAPtr = IRB.CreateStructGEP(LabelVATy, LabelVAAlloca, N);
- IRB.CreateStore(DFSF.collapseToPrimitiveShadow(DFSF.getShadow(*I), &CB),
- LabelVAPtr);
+ IRB.CreateStore(
+ DFSF.collapseToPrimitiveShadow(DFSF.getShadow(*I), CB.getIterator()),
+ LabelVAPtr);
}
Args.push_back(IRB.CreateStructGEP(LabelVATy, LabelVAAlloca, 0));
@@ -3020,7 +3035,7 @@ void DFSanVisitor::addShadowArguments(Function &F, CallBase &CB,
if (!DFSF.LabelReturnAlloca) {
DFSF.LabelReturnAlloca = new AllocaInst(
DFSF.DFS.PrimitiveShadowTy, getDataLayout().getAllocaAddrSpace(),
- "labelreturn", &DFSF.F->getEntryBlock().front());
+ "labelreturn", DFSF.F->getEntryBlock().begin());
}
Args.push_back(DFSF.LabelReturnAlloca);
}
@@ -3043,7 +3058,7 @@ void DFSanVisitor::addOriginArguments(Function &F, CallBase &CB,
ArrayType::get(DFSF.DFS.OriginTy, CB.arg_size() - FT->getNumParams());
auto *OriginVAAlloca =
new AllocaInst(OriginVATy, getDataLayout().getAllocaAddrSpace(),
- "originva", &DFSF.F->getEntryBlock().front());
+ "originva", DFSF.F->getEntryBlock().begin());
for (unsigned N = 0; I != CB.arg_end(); ++I, ++N) {
auto *OriginVAPtr = IRB.CreateStructGEP(OriginVATy, OriginVAAlloca, N);
@@ -3058,7 +3073,7 @@ void DFSanVisitor::addOriginArguments(Function &F, CallBase &CB,
if (!DFSF.OriginReturnAlloca) {
DFSF.OriginReturnAlloca = new AllocaInst(
DFSF.DFS.OriginTy, getDataLayout().getAllocaAddrSpace(),
- "originreturn", &DFSF.F->getEntryBlock().front());
+ "originreturn", DFSF.F->getEntryBlock().begin());
}
Args.push_back(DFSF.OriginReturnAlloca);
}
@@ -3155,8 +3170,9 @@ bool DFSanVisitor::visitWrappedCallBase(Function &F, CallBase &CB) {
if (!FT->getReturnType()->isVoidTy()) {
LoadInst *LabelLoad =
IRB.CreateLoad(DFSF.DFS.PrimitiveShadowTy, DFSF.LabelReturnAlloca);
- DFSF.setShadow(CustomCI, DFSF.expandFromPrimitiveShadow(
- FT->getReturnType(), LabelLoad, &CB));
+ DFSF.setShadow(CustomCI,
+ DFSF.expandFromPrimitiveShadow(
+ FT->getReturnType(), LabelLoad, CB.getIterator()));
if (ShouldTrackOrigins) {
LoadInst *OriginLoad =
IRB.CreateLoad(DFSF.DFS.OriginTy, DFSF.OriginReturnAlloca);
@@ -3433,8 +3449,8 @@ void DFSanVisitor::visitCallBase(CallBase &CB) {
void DFSanVisitor::visitPHINode(PHINode &PN) {
Type *ShadowTy = DFSF.DFS.getShadowTy(&PN);
- PHINode *ShadowPN =
- PHINode::Create(ShadowTy, PN.getNumIncomingValues(), "", &PN);
+ PHINode *ShadowPN = PHINode::Create(ShadowTy, PN.getNumIncomingValues(), "",
+ PN.getIterator());
// Give the shadow phi node valid predecessors to fool SplitEdge into working.
Value *UndefShadow = UndefValue::get(ShadowTy);
@@ -3445,8 +3461,8 @@ void DFSanVisitor::visitPHINode(PHINode &PN) {
PHINode *OriginPN = nullptr;
if (DFSF.DFS.shouldTrackOrigins()) {
- OriginPN =
- PHINode::Create(DFSF.DFS.OriginTy, PN.getNumIncomingValues(), "", &PN);
+ OriginPN = PHINode::Create(DFSF.DFS.OriginTy, PN.getNumIncomingValues(), "",
+ PN.getIterator());
Value *UndefOrigin = UndefValue::get(DFSF.DFS.OriginTy);
for (BasicBlock *BB : PN.blocks())
OriginPN->addIncoming(UndefOrigin, BB);
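Every DataFlowSanitizer change above has the same shape: an insertion position that used to be a bare Instruction pointer becomes a BasicBlock::iterator, and the IRBuilder is constructed from the (block, iterator) pair so the iterator's head-of-block bit survives the call boundary. A minimal sketch of the idiom -- the helper name and signature are invented for illustration, not taken from the patch:

#include <iterator>
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Hypothetical helper: the position arrives as an iterator, so "insert at
// the very start of the block" is preserved across the call boundary.
static Value *emitLoadAt(Type *Ty, Value *Addr, Align A,
                         BasicBlock::iterator Pos) {
  // Build from (parent block, iterator) rather than an Instruction *, which
  // would discard the debug-info bit the iterator carries.
  IRBuilder<> IRB(Pos->getParent(), Pos);
  return IRB.CreateAlignedLoad(Ty, Addr, A);
}

// "Immediately after I" stays an iterator too: std::next(I.getIterator())
// rather than I.getNextNode().
static BasicBlock::iterator afterInst(Instruction &I) {
  return std::next(I.getIterator());
}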
diff --git a/llvm/lib/Transforms/Instrumentation/KCFI.cpp b/llvm/lib/Transforms/Instrumentation/KCFI.cpp
index b1a26880c70106..b22e7f7fc0bee0 100644
--- a/llvm/lib/Transforms/Instrumentation/KCFI.cpp
+++ b/llvm/lib/Transforms/Instrumentation/KCFI.cpp
@@ -82,8 +82,8 @@ PreservedAnalyses KCFIPass::run(Function &F, FunctionAnalysisManager &AM) {
->getZExtValue();
// Drop the KCFI operand bundle.
- CallBase *Call =
- CallBase::removeOperandBundle(CI, LLVMContext::OB_kcfi, CI);
+ CallBase *Call = CallBase::removeOperandBundle(CI, LLVMContext::OB_kcfi,
+ CI->getIterator());
assert(Call != CI);
Call->copyMetadata(*CI);
CI->replaceAllUsesWith(Call);
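The KCFI change (and the matching ObjCARC.h site below) is a clone-and-replace: the bundle-free copy of the call is created at the original call's iterator, then the original is retired. As a self-contained sketch:

#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Illustrative: rebuild CI without one operand bundle, splice the copy in
// at CI's own position, and erase CI.
static CallBase *dropBundle(CallInst *CI, uint32_t BundleID) {
  CallBase *NewCall =
      CallBase::removeOperandBundle(CI, BundleID, CI->getIterator());
  if (NewCall == CI)  // bundle wasn't present; nothing was cloned
    return CI;
  NewCall->copyMetadata(*CI);
  CI->replaceAllUsesWith(NewCall);
  CI->eraseFromParent();
  return NewCall;
}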
diff --git a/llvm/lib/Transforms/ObjCARC/ObjCARC.cpp b/llvm/lib/Transforms/ObjCARC/ObjCARC.cpp
index 02f9db719e2616..33870d7ea192a3 100644
--- a/llvm/lib/Transforms/ObjCARC/ObjCARC.cpp
+++ b/llvm/lib/Transforms/ObjCARC/ObjCARC.cpp
@@ -23,7 +23,7 @@ using namespace llvm::objcarc;
CallInst *objcarc::createCallInstWithColors(
FunctionCallee Func, ArrayRef<Value *> Args, const Twine &NameStr,
- Instruction *InsertBefore,
+ BasicBlock::iterator InsertBefore,
const DenseMap<BasicBlock *, ColorVector> &BlockColors) {
FunctionType *FTy = Func.getFunctionType();
Value *Callee = Func.getCallee();
@@ -64,23 +64,23 @@ BundledRetainClaimRVs::insertAfterInvokes(Function &F, DominatorTree *DT) {
// We don't have to call insertRVCallWithColors since DestBB is the normal
// destination of the invoke.
- insertRVCall(&*DestBB->getFirstInsertionPt(), I);
+ insertRVCall(DestBB->getFirstInsertionPt(), I);
Changed = true;
}
return std::make_pair(Changed, CFGChanged);
}
-CallInst *BundledRetainClaimRVs::insertRVCall(Instruction *InsertPt,
+CallInst *BundledRetainClaimRVs::insertRVCall(BasicBlock::iterator InsertPt,
CallBase *AnnotatedCall) {
DenseMap<BasicBlock *, ColorVector> BlockColors;
return insertRVCallWithColors(InsertPt, AnnotatedCall, BlockColors);
}
CallInst *BundledRetainClaimRVs::insertRVCallWithColors(
- Instruction *InsertPt, CallBase *AnnotatedCall,
+ BasicBlock::iterator InsertPt, CallBase *AnnotatedCall,
const DenseMap<BasicBlock *, ColorVector> &BlockColors) {
- IRBuilder<> Builder(InsertPt);
+ IRBuilder<> Builder(InsertPt->getParent(), InsertPt);
Function *Func = *objcarc::getAttachedARCFunction(AnnotatedCall);
assert(Func && "operand isn't a Function");
Type *ParamTy = Func->getArg(0)->getType();
diff --git a/llvm/lib/Transforms/ObjCARC/ObjCARC.h b/llvm/lib/Transforms/ObjCARC/ObjCARC.h
index 9e68bd574851b5..f4d7c92d499c1f 100644
--- a/llvm/lib/Transforms/ObjCARC/ObjCARC.h
+++ b/llvm/lib/Transforms/ObjCARC/ObjCARC.h
@@ -99,7 +99,7 @@ static inline MDString *getRVInstMarker(Module &M) {
/// going to be removed from the IR before WinEHPrepare.
CallInst *createCallInstWithColors(
FunctionCallee Func, ArrayRef<Value *> Args, const Twine &NameStr,
- Instruction *InsertBefore,
+ BasicBlock::iterator InsertBefore,
const DenseMap<BasicBlock *, ColorVector> &BlockColors);
class BundledRetainClaimRVs {
@@ -113,11 +113,12 @@ class BundledRetainClaimRVs {
std::pair<bool, bool> insertAfterInvokes(Function &F, DominatorTree *DT);
/// Insert a retainRV/claimRV call.
- CallInst *insertRVCall(Instruction *InsertPt, CallBase *AnnotatedCall);
+ CallInst *insertRVCall(BasicBlock::iterator InsertPt,
+ CallBase *AnnotatedCall);
/// Insert a retainRV/claimRV call with colors.
CallInst *insertRVCallWithColors(
- Instruction *InsertPt, CallBase *AnnotatedCall,
+ BasicBlock::iterator InsertPt, CallBase *AnnotatedCall,
const DenseMap<BasicBlock *, ColorVector> &BlockColors);
/// See if an instruction is a bundled retainRV/claimRV call.
@@ -140,7 +141,8 @@ class BundledRetainClaimRVs {
}
auto *NewCall = CallBase::removeOperandBundle(
- It->second, LLVMContext::OB_clang_arc_attachedcall, It->second);
+ It->second, LLVMContext::OB_clang_arc_attachedcall,
+ It->second->getIterator());
NewCall->copyMetadata(*It->second);
It->second->replaceAllUsesWith(NewCall);
It->second->eraseFromParent();
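Interfaces like the one above migrate by making the iterator overload primary. Where callers still hold a plain Instruction pointer, a thin transitional wrapper is one option -- hypothetical here, since this commit updates the call sites directly:

#include "llvm/IR/Instructions.h"
using namespace llvm;

struct InserterExample {
  // Primary: the iterator pins down the insertion position exactly.
  CallInst *insertCall(BasicBlock::iterator InsertPt, CallBase *Annotated);

  // Transitional convenience (hypothetical): unwrap a pointer into its
  // iterator. Fine everywhere except block-start positions, which only an
  // iterator from getFirstInsertionPt() and friends can represent.
  CallInst *insertCall(Instruction *InsertPt, CallBase *Annotated) {
    return insertCall(InsertPt->getIterator(), Annotated);
  }
};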
diff --git a/llvm/lib/Transforms/ObjCARC/ObjCARCContract.cpp b/llvm/lib/Transforms/ObjCARC/ObjCARCContract.cpp
index c397ab63f388c4..0d0f5c72928ab7 100644
--- a/llvm/lib/Transforms/ObjCARC/ObjCARCContract.cpp
+++ b/llvm/lib/Transforms/ObjCARC/ObjCARCContract.cpp
@@ -382,12 +382,12 @@ void ObjCARCContract::tryToContractReleaseIntoStoreStrong(
Value *Args[] = { Load->getPointerOperand(), New };
if (Args[0]->getType() != I8XX)
- Args[0] = new BitCastInst(Args[0], I8XX, "", Store);
+ Args[0] = new BitCastInst(Args[0], I8XX, "", Store->getIterator());
if (Args[1]->getType() != I8X)
- Args[1] = new BitCastInst(Args[1], I8X, "", Store);
+ Args[1] = new BitCastInst(Args[1], I8X, "", Store->getIterator());
Function *Decl = EP.get(ARCRuntimeEntryPointKind::StoreStrong);
- CallInst *StoreStrong =
- objcarc::createCallInstWithColors(Decl, Args, "", Store, BlockColors);
+ CallInst *StoreStrong = objcarc::createCallInstWithColors(
+ Decl, Args, "", Store->getIterator(), BlockColors);
StoreStrong->setDoesNotThrow();
StoreStrong->setDebugLoc(Store->getDebugLoc());
@@ -472,8 +472,8 @@ bool ObjCARCContract::tryToPeepholeInstruction(
RVInstMarker->getString(),
/*Constraints=*/"", /*hasSideEffects=*/true);
- objcarc::createCallInstWithColors(IA, std::nullopt, "", Inst,
- BlockColors);
+ objcarc::createCallInstWithColors(IA, std::nullopt, "",
+ Inst->getIterator(), BlockColors);
}
decline_rv_optimization:
return false;
@@ -484,7 +484,7 @@ bool ObjCARCContract::tryToPeepholeInstruction(
if (IsNullOrUndef(CI->getArgOperand(1))) {
Value *Null = ConstantPointerNull::get(cast<PointerType>(CI->getType()));
Changed = true;
- new StoreInst(Null, CI->getArgOperand(0), CI);
+ new StoreInst(Null, CI->getArgOperand(0), CI->getIterator());
LLVM_DEBUG(dbgs() << "OBJCARCContract: Old = " << *CI << "\n"
<< " New = " << *Null << "\n");
@@ -575,7 +575,7 @@ bool ObjCARCContract::run(Function &F, AAResults *A, DominatorTree *D) {
if (auto *CI = dyn_cast<CallInst>(Inst))
if (objcarc::hasAttachedCallOpBundle(CI)) {
- BundledInsts->insertRVCallWithColors(&*I, CI, BlockColors);
+ BundledInsts->insertRVCallWithColors(I->getIterator(), CI, BlockColors);
--I;
Changed = true;
}
@@ -631,8 +631,8 @@ bool ObjCARCContract::run(Function &F, AAResults *A, DominatorTree *D) {
assert(DT->dominates(Inst, &InsertBB->back()) &&
"Invalid insertion point for bitcast");
- Replacement =
- new BitCastInst(Replacement, UseTy, "", &InsertBB->back());
+ Replacement = new BitCastInst(Replacement, UseTy, "",
+ InsertBB->back().getIterator());
}
// While we're here, rewrite all edges for this PHI, rather
@@ -649,8 +649,9 @@ bool ObjCARCContract::run(Function &F, AAResults *A, DominatorTree *D) {
}
} else {
if (Replacement->getType() != UseTy)
- Replacement = new BitCastInst(Replacement, UseTy, "",
- cast<Instruction>(U.getUser()));
+ Replacement =
+ new BitCastInst(Replacement, UseTy, "",
+ cast<Instruction>(U.getUser())->getIterator());
U.set(Replacement);
}
}
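The contract pass shows the usual position spellings in their iterator form. Roughly:

#include "llvm/IR/BasicBlock.h"
using namespace llvm;

static void positionSpellings(BasicBlock *BB) {
  // Before the last instruction (was: &BB->back()).
  BasicBlock::iterator AtBack = BB->back().getIterator();
  // Before the terminator (was: BB->getTerminator() as an Instruction *).
  BasicBlock::iterator AtTerm = BB->getTerminator()->getIterator();
  // Start of the block: pass the iterator through unmodified (was: &*...),
  // since dereferencing strips the marker for "front of block".
  BasicBlock::iterator AtFront = BB->getFirstInsertionPt();
  (void)AtBack; (void)AtTerm; (void)AtFront;
}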
diff --git a/llvm/lib/Transforms/ObjCARC/ObjCARCOpts.cpp b/llvm/lib/Transforms/ObjCARC/ObjCARCOpts.cpp
index b51e4d46bffeb2..72e860d7dcfa61 100644
--- a/llvm/lib/Transforms/ObjCARC/ObjCARCOpts.cpp
+++ b/llvm/lib/Transforms/ObjCARC/ObjCARCOpts.cpp
@@ -693,8 +693,9 @@ bool ObjCARCOpt::OptimizeInlinedAutoreleaseRVCall(
// AutoreleaseRV and RetainRV cancel out, replace UnsafeClaimRV with Release.
assert(Class == ARCInstKind::UnsafeClaimRV);
Value *CallArg = cast<CallInst>(Inst)->getArgOperand(0);
- CallInst *Release = CallInst::Create(
- EP.get(ARCRuntimeEntryPointKind::Release), CallArg, "", Inst);
+ CallInst *Release =
+ CallInst::Create(EP.get(ARCRuntimeEntryPointKind::Release), CallArg, "",
+ Inst->getIterator());
assert(IsAlwaysTail(ARCInstKind::UnsafeClaimRV) &&
"Expected UnsafeClaimRV to be safe to tail call");
Release->setTailCall();
@@ -808,7 +809,7 @@ void ObjCARCOpt::OptimizeIndividualCalls(Function &F) {
if (auto *CI = dyn_cast<CallInst>(Inst))
if (objcarc::hasAttachedCallOpBundle(CI)) {
- BundledInsts->insertRVCall(&*I, CI);
+ BundledInsts->insertRVCall(I->getIterator(), CI);
Changed = true;
}
@@ -934,7 +935,7 @@ void ObjCARCOpt::OptimizeIndividualCallImpl(Function &F, Instruction *Inst,
Changed = true;
new StoreInst(ConstantInt::getTrue(CI->getContext()),
PoisonValue::get(PointerType::getUnqual(CI->getContext())),
- CI);
+ CI->getIterator());
Value *NewValue = PoisonValue::get(CI->getType());
LLVM_DEBUG(
dbgs() << "A null pointer-to-weak-pointer is undefined behavior."
@@ -954,7 +955,7 @@ void ObjCARCOpt::OptimizeIndividualCallImpl(Function &F, Instruction *Inst,
Changed = true;
new StoreInst(ConstantInt::getTrue(CI->getContext()),
PoisonValue::get(PointerType::getUnqual(CI->getContext())),
- CI);
+ CI->getIterator());
Value *NewValue = PoisonValue::get(CI->getType());
LLVM_DEBUG(
@@ -990,8 +991,8 @@ void ObjCARCOpt::OptimizeIndividualCallImpl(Function &F, Instruction *Inst,
LLVMContext &C = Inst->getContext();
Function *Decl = EP.get(ARCRuntimeEntryPointKind::Release);
- CallInst *NewCall =
- CallInst::Create(Decl, Call->getArgOperand(0), "", Call);
+ CallInst *NewCall = CallInst::Create(Decl, Call->getArgOperand(0), "",
+ Call->getIterator());
NewCall->setMetadata(MDKindCache.get(ARCMDKindID::ImpreciseRelease),
MDNode::get(C, std::nullopt));
@@ -1143,7 +1144,8 @@ void ObjCARCOpt::OptimizeIndividualCallImpl(Function &F, Instruction *Inst,
if (IsNullOrUndef(Incoming))
continue;
Value *Op = PN->getIncomingValue(i);
- Instruction *InsertPos = &PN->getIncomingBlock(i)->back();
+ BasicBlock::iterator InsertPos =
+ PN->getIncomingBlock(i)->back().getIterator();
SmallVector<OperandBundleDef, 1> OpBundles;
cloneOpBundlesIf(CInst, OpBundles, [](const OperandBundleUse &B) {
return B.getTagID() != LLVMContext::OB_funclet;
@@ -1153,7 +1155,7 @@ void ObjCARCOpt::OptimizeIndividualCallImpl(Function &F, Instruction *Inst,
if (Op->getType() != ParamTy)
Op = new BitCastInst(Op, ParamTy, "", InsertPos);
Clone->setArgOperand(0, Op);
- Clone->insertBefore(InsertPos);
+ Clone->insertBefore(*InsertPos->getParent(), InsertPos);
LLVM_DEBUG(dbgs() << "Cloning " << *CInst << "\n"
"And inserting clone at "
@@ -1768,12 +1770,14 @@ void ObjCARCOpt::MoveCalls(Value *Arg, RRInfo &RetainsToMove,
// Insert the new retain and release calls.
for (Instruction *InsertPt : ReleasesToMove.ReverseInsertPts) {
- Value *MyArg = ArgTy == ParamTy ? Arg :
- new BitCastInst(Arg, ParamTy, "", InsertPt);
+ Value *MyArg = ArgTy == ParamTy ? Arg
+ : new BitCastInst(Arg, ParamTy, "",
+ InsertPt->getIterator());
Function *Decl = EP.get(ARCRuntimeEntryPointKind::Retain);
SmallVector<OperandBundleDef, 1> BundleList;
addOpBundleForFunclet(InsertPt->getParent(), BundleList);
- CallInst *Call = CallInst::Create(Decl, MyArg, BundleList, "", InsertPt);
+ CallInst *Call =
+ CallInst::Create(Decl, MyArg, BundleList, "", InsertPt->getIterator());
Call->setDoesNotThrow();
Call->setTailCall();
@@ -1783,12 +1787,14 @@ void ObjCARCOpt::MoveCalls(Value *Arg, RRInfo &RetainsToMove,
<< *InsertPt << "\n");
}
for (Instruction *InsertPt : RetainsToMove.ReverseInsertPts) {
- Value *MyArg = ArgTy == ParamTy ? Arg :
- new BitCastInst(Arg, ParamTy, "", InsertPt);
+ Value *MyArg = ArgTy == ParamTy ? Arg
+ : new BitCastInst(Arg, ParamTy, "",
+ InsertPt->getIterator());
Function *Decl = EP.get(ARCRuntimeEntryPointKind::Release);
SmallVector<OperandBundleDef, 1> BundleList;
addOpBundleForFunclet(InsertPt->getParent(), BundleList);
- CallInst *Call = CallInst::Create(Decl, MyArg, BundleList, "", InsertPt);
+ CallInst *Call =
+ CallInst::Create(Decl, MyArg, BundleList, "", InsertPt->getIterator());
// Attach a clang.imprecise_release metadata tag, if appropriate.
if (MDNode *M = ReleasesToMove.ReleaseMetadata)
Call->setMetadata(MDKindCache.get(ARCMDKindID::ImpreciseRelease), M);
@@ -2125,7 +2131,8 @@ void ObjCARCOpt::OptimizeWeakCalls(Function &F) {
// If the load has a builtin retain, insert a plain retain for it.
if (Class == ARCInstKind::LoadWeakRetained) {
Function *Decl = EP.get(ARCRuntimeEntryPointKind::Retain);
- CallInst *CI = CallInst::Create(Decl, EarlierCall, "", Call);
+ CallInst *CI =
+ CallInst::Create(Decl, EarlierCall, "", Call->getIterator());
CI->setTailCall();
}
// Zap the fully redundant load.
@@ -2154,7 +2161,8 @@ void ObjCARCOpt::OptimizeWeakCalls(Function &F) {
// If the load has a builtin retain, insert a plain retain for it.
if (Class == ARCInstKind::LoadWeakRetained) {
Function *Decl = EP.get(ARCRuntimeEntryPointKind::Retain);
- CallInst *CI = CallInst::Create(Decl, EarlierCall, "", Call);
+ CallInst *CI =
+ CallInst::Create(Decl, EarlierCall, "", Call->getIterator());
CI->setTailCall();
}
// Zap the fully redundant load.
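Moving or splicing an already-created instruction follows suit: insertBefore and moveBefore take the destination as a (block, iterator) pair. A sketch:

#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Illustrative: place an unlinked instruction, or relocate an existing one,
// at an iterator-described position.
static void placeAt(Instruction *NewI, BasicBlock::iterator Pos) {
  NewI->insertBefore(*Pos->getParent(), Pos);
}
static void relocateTo(Instruction *I, BasicBlock::iterator Pos) {
  I->moveBefore(*Pos->getParent(), Pos);
}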
diff --git a/llvm/lib/Transforms/Scalar/CorrelatedValuePropagation.cpp b/llvm/lib/Transforms/Scalar/CorrelatedValuePropagation.cpp
index 490cb7e528eb6f..7a2011888ab008 100644
--- a/llvm/lib/Transforms/Scalar/CorrelatedValuePropagation.cpp
+++ b/llvm/lib/Transforms/Scalar/CorrelatedValuePropagation.cpp
@@ -590,7 +590,7 @@ static bool processSaturatingInst(SaturatingInst *SI, LazyValueInfo *LVI) {
bool NSW = SI->isSigned();
bool NUW = !SI->isSigned();
BinaryOperator *BinOp = BinaryOperator::Create(
- Opcode, SI->getLHS(), SI->getRHS(), SI->getName(), SI);
+ Opcode, SI->getLHS(), SI->getRHS(), SI->getName(), SI->getIterator());
BinOp->setDebugLoc(SI->getDebugLoc());
setDeducedOverflowingFlags(BinOp, Opcode, NSW, NUW);
@@ -911,8 +911,8 @@ static bool processSRem(BinaryOperator *SDI, const ConstantRange &LCR,
Op.V = BO;
}
- auto *URem =
- BinaryOperator::CreateURem(Ops[0].V, Ops[1].V, SDI->getName(), SDI);
+ auto *URem = BinaryOperator::CreateURem(Ops[0].V, Ops[1].V, SDI->getName(),
+ SDI->getIterator());
URem->setDebugLoc(SDI->getDebugLoc());
auto *Res = URem;
@@ -973,8 +973,8 @@ static bool processSDiv(BinaryOperator *SDI, const ConstantRange &LCR,
Op.V = BO;
}
- auto *UDiv =
- BinaryOperator::CreateUDiv(Ops[0].V, Ops[1].V, SDI->getName(), SDI);
+ auto *UDiv = BinaryOperator::CreateUDiv(Ops[0].V, Ops[1].V, SDI->getName(),
+ SDI->getIterator());
UDiv->setDebugLoc(SDI->getDebugLoc());
UDiv->setIsExact(SDI->isExact());
@@ -1041,7 +1041,7 @@ static bool processAShr(BinaryOperator *SDI, LazyValueInfo *LVI) {
++NumAShrsConverted;
auto *BO = BinaryOperator::CreateLShr(SDI->getOperand(0), SDI->getOperand(1),
- "", SDI);
+ "", SDI->getIterator());
BO->takeName(SDI);
BO->setDebugLoc(SDI->getDebugLoc());
BO->setIsExact(SDI->isExact());
@@ -1061,7 +1061,8 @@ static bool processSExt(SExtInst *SDI, LazyValueInfo *LVI) {
return false;
++NumSExt;
- auto *ZExt = CastInst::CreateZExtOrBitCast(Base, SDI->getType(), "", SDI);
+ auto *ZExt = CastInst::CreateZExtOrBitCast(Base, SDI->getType(), "",
+ SDI->getIterator());
ZExt->takeName(SDI);
ZExt->setDebugLoc(SDI->getDebugLoc());
ZExt->setNonNeg();
diff --git a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index c2c63d100014f7..3d1dac5ea17e2d 100644
--- a/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -634,7 +634,8 @@ static bool tryToShorten(Instruction *DeadI, int64_t &DeadStart,
Value *Indices[1] = {
ConstantInt::get(DeadWriteLength->getType(), ToRemoveSize)};
Instruction *NewDestGEP = GetElementPtrInst::CreateInBounds(
- Type::getInt8Ty(DeadIntrinsic->getContext()), OrigDest, Indices, "", DeadI);
+ Type::getInt8Ty(DeadIntrinsic->getContext()), OrigDest, Indices, "",
+ DeadI->getIterator());
NewDestGEP->setDebugLoc(DeadIntrinsic->getDebugLoc());
DeadIntrinsic->setDest(NewDestGEP);
}
diff --git a/llvm/lib/Transforms/Scalar/DivRemPairs.cpp b/llvm/lib/Transforms/Scalar/DivRemPairs.cpp
index 57d3f312186ec3..45f36a36b5dd0e 100644
--- a/llvm/lib/Transforms/Scalar/DivRemPairs.cpp
+++ b/llvm/lib/Transforms/Scalar/DivRemPairs.cpp
@@ -383,14 +383,16 @@ static bool optimizeDivRem(Function &F, const TargetTransformInfo &TTI,
// If X is not frozen, %rem becomes undef after transformation.
// TODO: We need an undef-specific checking function in ValueTracking
if (!isGuaranteedNotToBeUndefOrPoison(X, nullptr, DivInst, &DT)) {
- auto *FrX = new FreezeInst(X, X->getName() + ".frozen", DivInst);
+ auto *FrX =
+ new FreezeInst(X, X->getName() + ".frozen", DivInst->getIterator());
DivInst->setOperand(0, FrX);
Sub->setOperand(0, FrX);
}
// Same for Y. If X = 1 and Y = (undef | 1), %rem in src is either 1 or 0,
// but %rem in tgt can be one of many integer values.
if (!isGuaranteedNotToBeUndefOrPoison(Y, nullptr, DivInst, &DT)) {
- auto *FrY = new FreezeInst(Y, Y->getName() + ".frozen", DivInst);
+ auto *FrY =
+ new FreezeInst(Y, Y->getName() + ".frozen", DivInst->getIterator());
DivInst->setOperand(1, FrY);
Mul->setOperand(1, FrY);
}
diff --git a/llvm/lib/Transforms/Scalar/GVN.cpp b/llvm/lib/Transforms/Scalar/GVN.cpp
index dcb1ed334b6103..67fb2a5da3bb71 100644
--- a/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -1056,7 +1056,8 @@ Value *AvailableValue::MaterializeAdjustedValue(LoadInst *Load,
// Introduce a new value select for a load from an eligible pointer select.
SelectInst *Sel = getSelectValue();
assert(V1 && V2 && "both value operands of the select must be present");
- Res = SelectInst::Create(Sel->getCondition(), V1, V2, "", Sel);
+ Res =
+ SelectInst::Create(Sel->getCondition(), V1, V2, "", Sel->getIterator());
} else {
llvm_unreachable("Should not materialize value from dead block");
}
@@ -1412,10 +1413,10 @@ void GVNPass::eliminatePartiallyRedundantLoad(
BasicBlock *UnavailableBlock = AvailableLoad.first;
Value *LoadPtr = AvailableLoad.second;
- auto *NewLoad =
- new LoadInst(Load->getType(), LoadPtr, Load->getName() + ".pre",
- Load->isVolatile(), Load->getAlign(), Load->getOrdering(),
- Load->getSyncScopeID(), UnavailableBlock->getTerminator());
+ auto *NewLoad = new LoadInst(
+ Load->getType(), LoadPtr, Load->getName() + ".pre", Load->isVolatile(),
+ Load->getAlign(), Load->getOrdering(), Load->getSyncScopeID(),
+ UnavailableBlock->getTerminator()->getIterator());
NewLoad->setDebugLoc(Load->getDebugLoc());
if (MSSAU) {
auto *NewAccess = MSSAU->createMemoryAccessInBB(
@@ -1994,8 +1995,9 @@ bool GVNPass::processAssumeIntrinsic(AssumeInst *IntrinsicI) {
// Insert a new store to null instruction before the load to indicate that
// this code is not reachable. FIXME: We could insert an unreachable
// instruction directly because we can modify the CFG.
- auto *NewS = new StoreInst(PoisonValue::get(Int8Ty),
- Constant::getNullValue(PtrTy), IntrinsicI);
+ auto *NewS =
+ new StoreInst(PoisonValue::get(Int8Ty), Constant::getNullValue(PtrTy),
+ IntrinsicI->getIterator());
if (MSSAU) {
const MemoryUseOrDef *FirstNonDom = nullptr;
const auto *AL =
diff --git a/llvm/lib/Transforms/Scalar/GuardWidening.cpp b/llvm/lib/Transforms/Scalar/GuardWidening.cpp
index 3bbf6642a90cf7..d3787b28347c9e 100644
--- a/llvm/lib/Transforms/Scalar/GuardWidening.cpp
+++ b/llvm/lib/Transforms/Scalar/GuardWidening.cpp
@@ -121,12 +121,13 @@ static void eliminateGuard(Instruction *GuardInst, MemorySSAUpdater *MSSAU) {
/// condition should stay invariant. Otherwise there can be a miscompile, like
/// the one described at https://github.com/llvm/llvm-project/issues/60234. The
/// safest way to do it is to expand the new condition at WC's block.
-static Instruction *findInsertionPointForWideCondition(Instruction *WCOrGuard) {
+static std::optional<BasicBlock::iterator>
+findInsertionPointForWideCondition(Instruction *WCOrGuard) {
if (isGuard(WCOrGuard))
- return WCOrGuard;
+ return WCOrGuard->getIterator();
if (auto WC = extractWidenableCondition(WCOrGuard))
- return cast<Instruction>(WC);
- return nullptr;
+ return cast<Instruction>(WC)->getIterator();
+ return std::nullopt;
}
class GuardWideningImpl {
@@ -182,30 +183,30 @@ class GuardWideningImpl {
/// into \p WideningPoint.
WideningScore computeWideningScore(Instruction *DominatedInstr,
Instruction *ToWiden,
- Instruction *WideningPoint,
+ BasicBlock::iterator WideningPoint,
SmallVectorImpl<Value *> &ChecksToHoist,
SmallVectorImpl<Value *> &ChecksToWiden);
/// Helper to check if \p V can be hoisted to \p InsertPos.
- bool canBeHoistedTo(const Value *V, const Instruction *InsertPos) const {
+ bool canBeHoistedTo(const Value *V, BasicBlock::iterator InsertPos) const {
SmallPtrSet<const Instruction *, 8> Visited;
return canBeHoistedTo(V, InsertPos, Visited);
}
- bool canBeHoistedTo(const Value *V, const Instruction *InsertPos,
+ bool canBeHoistedTo(const Value *V, BasicBlock::iterator InsertPos,
SmallPtrSetImpl<const Instruction *> &Visited) const;
bool canBeHoistedTo(const SmallVectorImpl<Value *> &Checks,
- const Instruction *InsertPos) const {
+ BasicBlock::iterator InsertPos) const {
return all_of(Checks,
[&](const Value *V) { return canBeHoistedTo(V, InsertPos); });
}
/// Helper to hoist \p V to \p InsertPos. Guaranteed to succeed if \c
/// canBeHoistedTo returned true.
- void makeAvailableAt(Value *V, Instruction *InsertPos) const;
+ void makeAvailableAt(Value *V, BasicBlock::iterator InsertPos) const;
void makeAvailableAt(const SmallVectorImpl<Value *> &Checks,
- Instruction *InsertPos) const {
+ BasicBlock::iterator InsertPos) const {
for (Value *V : Checks)
makeAvailableAt(V, InsertPos);
}
@@ -217,18 +218,19 @@ class GuardWideningImpl {
/// InsertPt is true then actually generate the resulting expression, make it
/// available at \p InsertPt and return it in \p Result (else no change to the
/// IR is made).
- std::optional<Value *> mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
- SmallVectorImpl<Value *> &ChecksToWiden,
- Instruction *InsertPt);
+ std::optional<Value *>
+ mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
+ SmallVectorImpl<Value *> &ChecksToWiden,
+ std::optional<BasicBlock::iterator> InsertPt);
/// Generate the logical AND of \p ChecksToHoist and \p OldCondition and make
/// it available at InsertPt
Value *hoistChecks(SmallVectorImpl<Value *> &ChecksToHoist,
- Value *OldCondition, Instruction *InsertPt);
+ Value *OldCondition, BasicBlock::iterator InsertPt);
/// Adds freeze to Orig and push it as far as possible very aggressively.
/// Also replaces all uses of frozen instruction with frozen version.
- Value *freezeAndPush(Value *Orig, Instruction *InsertPt);
+ Value *freezeAndPush(Value *Orig, BasicBlock::iterator InsertPt);
/// Represents a range check of the form \c Base + \c Offset u< \c Length,
/// with the constraint that \c Length is not negative. \c CheckInst is the
@@ -294,7 +296,7 @@ class GuardWideningImpl {
/// for the price of computing only one of the set of expressions?
bool isWideningCondProfitable(SmallVectorImpl<Value *> &ChecksToHoist,
SmallVectorImpl<Value *> &ChecksToWiden) {
- return mergeChecks(ChecksToHoist, ChecksToWiden, /*InsertPt=*/nullptr)
+ return mergeChecks(ChecksToHoist, ChecksToWiden, /*InsertPt=*/std::nullopt)
.has_value();
}
@@ -302,11 +304,11 @@ class GuardWideningImpl {
void widenGuard(SmallVectorImpl<Value *> &ChecksToHoist,
SmallVectorImpl<Value *> &ChecksToWiden,
Instruction *ToWiden) {
- Instruction *InsertPt = findInsertionPointForWideCondition(ToWiden);
+ auto InsertPt = findInsertionPointForWideCondition(ToWiden);
auto MergedCheck = mergeChecks(ChecksToHoist, ChecksToWiden, InsertPt);
Value *Result = MergedCheck ? *MergedCheck
: hoistChecks(ChecksToHoist,
- getCondition(ToWiden), InsertPt);
+ getCondition(ToWiden), *InsertPt);
if (isGuardAsWidenableBranch(ToWiden)) {
setWidenableBranchCond(cast<BranchInst>(ToWiden), Result);
@@ -417,12 +419,12 @@ bool GuardWideningImpl::eliminateInstrViaWidening(
assert((i == (e - 1)) == (Instr->getParent() == CurBB) && "Bad DFS?");
for (auto *Candidate : make_range(I, E)) {
- auto *WideningPoint = findInsertionPointForWideCondition(Candidate);
+ auto WideningPoint = findInsertionPointForWideCondition(Candidate);
if (!WideningPoint)
continue;
SmallVector<Value *> CandidateChecks;
parseWidenableGuard(Candidate, CandidateChecks);
- auto Score = computeWideningScore(Instr, Candidate, WideningPoint,
+ auto Score = computeWideningScore(Instr, Candidate, *WideningPoint,
ChecksToHoist, CandidateChecks);
LLVM_DEBUG(dbgs() << "Score between " << *Instr << " and " << *Candidate
<< " is " << scoreTypeToString(Score) << "\n");
@@ -456,7 +458,7 @@ bool GuardWideningImpl::eliminateInstrViaWidening(
GuardWideningImpl::WideningScore GuardWideningImpl::computeWideningScore(
Instruction *DominatedInstr, Instruction *ToWiden,
- Instruction *WideningPoint, SmallVectorImpl<Value *> &ChecksToHoist,
+ BasicBlock::iterator WideningPoint, SmallVectorImpl<Value *> &ChecksToHoist,
SmallVectorImpl<Value *> &ChecksToWiden) {
Loop *DominatedInstrLoop = LI.getLoopFor(DominatedInstr->getParent());
Loop *DominatingGuardLoop = LI.getLoopFor(WideningPoint->getParent());
@@ -559,7 +561,7 @@ GuardWideningImpl::WideningScore GuardWideningImpl::computeWideningScore(
}
bool GuardWideningImpl::canBeHoistedTo(
- const Value *V, const Instruction *Loc,
+ const Value *V, BasicBlock::iterator Loc,
SmallPtrSetImpl<const Instruction *> &Visited) const {
auto *Inst = dyn_cast<Instruction>(V);
if (!Inst || DT.dominates(Inst, Loc) || Visited.count(Inst))
@@ -580,7 +582,8 @@ bool GuardWideningImpl::canBeHoistedTo(
[&](Value *Op) { return canBeHoistedTo(Op, Loc, Visited); });
}
-void GuardWideningImpl::makeAvailableAt(Value *V, Instruction *Loc) const {
+void GuardWideningImpl::makeAvailableAt(Value *V,
+ BasicBlock::iterator Loc) const {
auto *Inst = dyn_cast<Instruction>(V);
if (!Inst || DT.dominates(Inst, Loc))
return;
@@ -592,7 +595,7 @@ void GuardWideningImpl::makeAvailableAt(Value *V, Instruction *Loc) const {
for (Value *Op : Inst->operands())
makeAvailableAt(Op, Loc);
- Inst->moveBefore(Loc);
+ Inst->moveBefore(*Loc->getParent(), Loc);
}
// Return Instruction before which we can insert freeze for the value V as close
@@ -621,14 +624,15 @@ getFreezeInsertPt(Value *V, const DominatorTree &DT) {
return Res;
}
-Value *GuardWideningImpl::freezeAndPush(Value *Orig, Instruction *InsertPt) {
+Value *GuardWideningImpl::freezeAndPush(Value *Orig,
+ BasicBlock::iterator InsertPt) {
if (isGuaranteedNotToBePoison(Orig, nullptr, InsertPt, &DT))
return Orig;
std::optional<BasicBlock::iterator> InsertPtAtDef =
getFreezeInsertPt(Orig, DT);
if (!InsertPtAtDef) {
FreezeInst *FI = new FreezeInst(Orig, "gw.freeze");
- FI->insertBefore(InsertPt);
+ FI->insertBefore(*InsertPt->getParent(), InsertPt);
return FI;
}
if (isa<Constant>(Orig) || isa<GlobalValue>(Orig)) {
@@ -715,7 +719,7 @@ Value *GuardWideningImpl::freezeAndPush(Value *Orig, Instruction *InsertPt) {
std::optional<Value *>
GuardWideningImpl::mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
SmallVectorImpl<Value *> &ChecksToWiden,
- Instruction *InsertPt) {
+ std::optional<BasicBlock::iterator> InsertPt) {
using namespace llvm::PatternMatch;
Value *Result = nullptr;
@@ -747,10 +751,10 @@ GuardWideningImpl::mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
if (Intersect->getEquivalentICmp(Pred, NewRHSAP)) {
if (InsertPt) {
ConstantInt *NewRHS =
- ConstantInt::get(InsertPt->getContext(), NewRHSAP);
- assert(canBeHoistedTo(LHS, InsertPt) && "must be");
- makeAvailableAt(LHS, InsertPt);
- Result = new ICmpInst(InsertPt, Pred, LHS, NewRHS, "wide.chk");
+ ConstantInt::get((*InsertPt)->getContext(), NewRHSAP);
+ assert(canBeHoistedTo(LHS, *InsertPt) && "must be");
+ makeAvailableAt(LHS, *InsertPt);
+ Result = new ICmpInst(*InsertPt, Pred, LHS, NewRHS, "wide.chk");
}
return Result;
}
@@ -765,16 +769,16 @@ GuardWideningImpl::mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
combineRangeChecks(Checks, CombinedChecks)) {
if (InsertPt) {
for (auto &RC : CombinedChecks) {
- makeAvailableAt(RC.getCheckInst(), InsertPt);
+ makeAvailableAt(RC.getCheckInst(), *InsertPt);
if (Result)
Result = BinaryOperator::CreateAnd(RC.getCheckInst(), Result, "",
- InsertPt);
+ *InsertPt);
else
Result = RC.getCheckInst();
}
assert(Result && "Failed to find result value");
Result->setName("wide.chk");
- Result = freezeAndPush(Result, InsertPt);
+ Result = freezeAndPush(Result, *InsertPt);
}
return Result;
}
@@ -786,9 +790,9 @@ GuardWideningImpl::mergeChecks(SmallVectorImpl<Value *> &ChecksToHoist,
Value *GuardWideningImpl::hoistChecks(SmallVectorImpl<Value *> &ChecksToHoist,
Value *OldCondition,
- Instruction *InsertPt) {
+ BasicBlock::iterator InsertPt) {
assert(!ChecksToHoist.empty());
- IRBuilder<> Builder(InsertPt);
+ IRBuilder<> Builder(InsertPt->getParent(), InsertPt);
makeAvailableAt(ChecksToHoist, InsertPt);
makeAvailableAt(OldCondition, InsertPt);
Value *Result = Builder.CreateAnd(ChecksToHoist);
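GuardWidening is the interesting case: the old code used a null Instruction * to mean "no insertion point", and iterators have no natural null, hence the std::optional<BasicBlock::iterator> threaded through the pass. The shape, reduced to a sketch (the predicate is a stand-in for the pass's real checks):

#include <optional>
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

static std::optional<BasicBlock::iterator> maybeInsertPt(Instruction *I) {
  if (!I->getParent())      // stand-in condition for illustration
    return std::nullopt;    // was: return nullptr;
  return I->getIterator();  // was: return I;
}

static void emitAt(Instruction *I) {
  if (auto Pos = maybeInsertPt(I)) {
    // *Pos is the iterator itself; build the usual (block, iterator) pair.
    IRBuilder<> IRB((*Pos)->getParent(), *Pos);
    (void)IRB;
  }
}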
diff --git a/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp b/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
index 41c4d623617347..38104afa7f7837 100644
--- a/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
+++ b/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
@@ -350,18 +350,19 @@ bool IndVarSimplify::handleFloatingPointIV(Loop *L, PHINode *PN) {
IntegerType *Int32Ty = Type::getInt32Ty(PN->getContext());
// Insert new integer induction variable.
- PHINode *NewPHI = PHINode::Create(Int32Ty, 2, PN->getName()+".int", PN);
+ PHINode *NewPHI =
+ PHINode::Create(Int32Ty, 2, PN->getName() + ".int", PN->getIterator());
NewPHI->addIncoming(ConstantInt::get(Int32Ty, InitValue),
PN->getIncomingBlock(IncomingEdge));
Value *NewAdd =
- BinaryOperator::CreateAdd(NewPHI, ConstantInt::get(Int32Ty, IncValue),
- Incr->getName()+".int", Incr);
+ BinaryOperator::CreateAdd(NewPHI, ConstantInt::get(Int32Ty, IncValue),
+ Incr->getName() + ".int", Incr->getIterator());
NewPHI->addIncoming(NewAdd, PN->getIncomingBlock(BackEdge));
- ICmpInst *NewCompare = new ICmpInst(TheBr, NewPred, NewAdd,
- ConstantInt::get(Int32Ty, ExitValue),
- Compare->getName());
+ ICmpInst *NewCompare =
+ new ICmpInst(TheBr->getIterator(), NewPred, NewAdd,
+ ConstantInt::get(Int32Ty, ExitValue), Compare->getName());
// In the following deletions, PN may become dead and may be deleted.
// Use a WeakTrackingVH to observe whether this happens.
@@ -386,7 +387,7 @@ bool IndVarSimplify::handleFloatingPointIV(Loop *L, PHINode *PN) {
// platforms.
if (WeakPH) {
Value *Conv = new SIToFPInst(NewPHI, PN->getType(), "indvar.conv",
- &*PN->getParent()->getFirstInsertionPt());
+ PN->getParent()->getFirstInsertionPt());
PN->replaceAllUsesWith(Conv);
RecursivelyDeleteTriviallyDeadInstructions(PN, TLI, MSSAU.get());
}
@@ -1516,9 +1517,9 @@ bool IndVarSimplify::canonicalizeExitCondition(Loop *L) {
// loop varying work to loop-invariant work.
auto doRotateTransform = [&]() {
assert(ICmp->isUnsigned() && "must have proven unsigned already");
- auto *NewRHS =
- CastInst::Create(Instruction::Trunc, RHS, LHSOp->getType(), "",
- L->getLoopPreheader()->getTerminator());
+ auto *NewRHS = CastInst::Create(
+ Instruction::Trunc, RHS, LHSOp->getType(), "",
+ L->getLoopPreheader()->getTerminator()->getIterator());
ICmp->setOperand(Swapped ? 1 : 0, LHSOp);
ICmp->setOperand(Swapped ? 0 : 1, NewRHS);
if (LHS->use_empty())
diff --git a/llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp b/llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp
index 851eab04c8dbb2..fbefd0e9368b2c 100644
--- a/llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp
+++ b/llvm/lib/Transforms/Scalar/InferAddressSpaces.cpp
@@ -1313,7 +1313,7 @@ bool InferAddressSpacesImpl::rewriteWithNewAddressSpaces(
++InsertPos;
// This instruction may contain multiple uses of V, update them all.
CurUser->replaceUsesOfWith(
- V, new AddrSpaceCastInst(NewV, V->getType(), "", &*InsertPos));
+ V, new AddrSpaceCastInst(NewV, V->getType(), "", InsertPos));
} else {
CurUser->replaceUsesOfWith(
V, ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV),
diff --git a/llvm/lib/Transforms/Scalar/JumpThreading.cpp b/llvm/lib/Transforms/Scalar/JumpThreading.cpp
index 5816bf2a22598d..a04987ce66241b 100644
--- a/llvm/lib/Transforms/Scalar/JumpThreading.cpp
+++ b/llvm/lib/Transforms/Scalar/JumpThreading.cpp
@@ -1037,7 +1037,7 @@ bool JumpThreadingPass::processBlock(BasicBlock *BB) {
LLVM_DEBUG(dbgs() << " In block '" << BB->getName()
<< "' folding undef terminator: " << *BBTerm << '\n');
- BranchInst::Create(BBTerm->getSuccessor(BestSucc), BBTerm);
+ BranchInst::Create(BBTerm->getSuccessor(BestSucc), BBTerm->getIterator());
++NumFolds;
BBTerm->eraseFromParent();
DTU->applyUpdatesPermissive(Updates);
@@ -1202,7 +1202,7 @@ bool JumpThreadingPass::processImpliedCondition(BasicBlock *BB) {
BasicBlock *KeepSucc = BI->getSuccessor(*Implication ? 0 : 1);
BasicBlock *RemoveSucc = BI->getSuccessor(*Implication ? 1 : 0);
RemoveSucc->removePredecessor(BB);
- BranchInst *UncondBI = BranchInst::Create(KeepSucc, BI);
+ BranchInst *UncondBI = BranchInst::Create(KeepSucc, BI->getIterator());
UncondBI->setDebugLoc(BI->getDebugLoc());
++NumFolds;
BI->eraseFromParent();
@@ -1280,7 +1280,7 @@ bool JumpThreadingPass::simplifyPartiallyRedundantLoad(LoadInst *LoadI) {
AvailableVal = PoisonValue::get(LoadI->getType());
if (AvailableVal->getType() != LoadI->getType())
AvailableVal = CastInst::CreateBitOrPointerCast(
- AvailableVal, LoadI->getType(), "", LoadI);
+ AvailableVal, LoadI->getType(), "", LoadI->getIterator());
LoadI->replaceAllUsesWith(AvailableVal);
LoadI->eraseFromParent();
return true;
@@ -1421,7 +1421,7 @@ bool JumpThreadingPass::simplifyPartiallyRedundantLoad(LoadInst *LoadI) {
LoadI->getType(), LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
LoadI->getName() + ".pr", false, LoadI->getAlign(),
LoadI->getOrdering(), LoadI->getSyncScopeID(),
- UnavailablePred->getTerminator());
+ UnavailablePred->getTerminator()->getIterator());
NewVal->setDebugLoc(LoadI->getDebugLoc());
if (AATags)
NewVal->setAAMetadata(AATags);
@@ -1454,8 +1454,8 @@ bool JumpThreadingPass::simplifyPartiallyRedundantLoad(LoadInst *LoadI) {
// predecessor use the same bitcast.
Value *&PredV = I->second;
if (PredV->getType() != LoadI->getType())
- PredV = CastInst::CreateBitOrPointerCast(PredV, LoadI->getType(), "",
- P->getTerminator());
+ PredV = CastInst::CreateBitOrPointerCast(
+ PredV, LoadI->getType(), "", P->getTerminator()->getIterator());
PN->addIncoming(PredV, I->first);
}
@@ -1653,7 +1653,7 @@ bool JumpThreadingPass::processThreadableEdges(Value *Cond, BasicBlock *BB,
// Finally update the terminator.
Instruction *Term = BB->getTerminator();
- BranchInst::Create(OnlyDest, Term);
+ BranchInst::Create(OnlyDest, Term->getIterator());
++NumFolds;
Term->eraseFromParent();
DTU->applyUpdatesPermissive(Updates);
@@ -2971,13 +2971,13 @@ bool JumpThreadingPass::tryToUnfoldSelectInCurrBB(BasicBlock *BB) {
// Expand the select.
Value *Cond = SI->getCondition();
if (!isGuaranteedNotToBeUndefOrPoison(Cond, nullptr, SI))
- Cond = new FreezeInst(Cond, "cond.fr", SI);
+ Cond = new FreezeInst(Cond, "cond.fr", SI->getIterator());
MDNode *BranchWeights = getBranchWeightMDNode(*SI);
Instruction *Term =
SplitBlockAndInsertIfThen(Cond, SI, false, BranchWeights);
BasicBlock *SplitBB = SI->getParent();
BasicBlock *NewBB = Term->getParent();
- PHINode *NewPN = PHINode::Create(SI->getType(), 2, "", SI);
+ PHINode *NewPN = PHINode::Create(SI->getType(), 2, "", SI->getIterator());
NewPN->addIncoming(SI->getTrueValue(), Term->getParent());
NewPN->addIncoming(SI->getFalseValue(), BB);
SI->replaceAllUsesWith(NewPN);
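Terminator folding in JumpThreading is create-then-erase: the replacement branch goes in at the old terminator's iterator before the old one is deleted. A sketch:

#include "llvm/IR/Instructions.h"
using namespace llvm;

// Illustrative: fold BB's conditional terminator into an unconditional
// branch to Keep. Successor and DTU bookkeeping is omitted.
static void foldTerminator(BasicBlock *BB, BasicBlock *Keep) {
  Instruction *Term = BB->getTerminator();
  BranchInst *BI = BranchInst::Create(Keep, Term->getIterator());
  BI->setDebugLoc(Term->getDebugLoc());
  Term->eraseFromParent();
}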
diff --git a/llvm/lib/Transforms/Scalar/LICM.cpp b/llvm/lib/Transforms/Scalar/LICM.cpp
index 546e718cb50851..40bc16f9b57516 100644
--- a/llvm/lib/Transforms/Scalar/LICM.cpp
+++ b/llvm/lib/Transforms/Scalar/LICM.cpp
@@ -2234,7 +2234,7 @@ bool llvm::promoteLoopAccessesToScalars(
if (FoundLoadToPromote || !StoreIsGuanteedToExecute) {
PreheaderLoad =
new LoadInst(AccessTy, SomePtr, SomePtr->getName() + ".promoted",
- Preheader->getTerminator());
+ Preheader->getTerminator()->getIterator());
if (SawUnorderedAtomic)
PreheaderLoad->setOrdering(AtomicOrdering::Unordered);
PreheaderLoad->setAlignment(Alignment);
diff --git a/llvm/lib/Transforms/Scalar/LoopFlatten.cpp b/llvm/lib/Transforms/Scalar/LoopFlatten.cpp
index 3eca1520a6bd55..0e9cf328f149be 100644
--- a/llvm/lib/Transforms/Scalar/LoopFlatten.cpp
+++ b/llvm/lib/Transforms/Scalar/LoopFlatten.cpp
@@ -762,7 +762,7 @@ static bool DoFlattenLoopPair(FlattenInfo &FI, DominatorTree *DT, LoopInfo *LI,
if (!FI.NewTripCount) {
FI.NewTripCount = BinaryOperator::CreateMul(
FI.InnerTripCount, FI.OuterTripCount, "flatten.tripcount",
- FI.OuterLoop->getLoopPreheader()->getTerminator());
+ FI.OuterLoop->getLoopPreheader()->getTerminator()->getIterator());
LLVM_DEBUG(dbgs() << "Created new trip count in preheader: ";
FI.NewTripCount->dump());
}
diff --git a/llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp b/llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp
index 3721564890ddb4..c7e25c9f3d2c92 100644
--- a/llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp
+++ b/llvm/lib/Transforms/Scalar/LoopIdiomRecognize.cpp
@@ -2409,15 +2409,15 @@ bool LoopIdiomRecognize::recognizeShiftUntilBitTest() {
if (!isGuaranteedNotToBeUndefOrPoison(BitPos)) {
// BitMask may be computed from BitPos; freeze BitPos so we can increase
// its use count.
- Instruction *InsertPt = nullptr;
+ std::optional<BasicBlock::iterator> InsertPt = std::nullopt;
if (auto *BitPosI = dyn_cast<Instruction>(BitPos))
- InsertPt = &**BitPosI->getInsertionPointAfterDef();
+ InsertPt = BitPosI->getInsertionPointAfterDef();
else
- InsertPt = &*DT->getRoot()->getFirstNonPHIOrDbgOrAlloca();
+ InsertPt = DT->getRoot()->getFirstNonPHIOrDbgOrAlloca();
if (!InsertPt)
return false;
FreezeInst *BitPosFrozen =
- new FreezeInst(BitPos, BitPos->getName() + ".fr", InsertPt);
+ new FreezeInst(BitPos, BitPos->getName() + ".fr", *InsertPt);
BitPos->replaceUsesWithIf(BitPosFrozen, [BitPosFrozen](Use &U) {
return U.getUser() != BitPosFrozen;
});
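Instruction::getInsertionPointAfterDef() already returns std::optional<BasicBlock::iterator>, so the old &** unwrap-and-rewrap disappears and the optional flows straight through, e.g.:

#include <optional>
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Illustrative: freeze V right after its definition, or at the entry
// block's first viable position for non-instruction values.
static FreezeInst *freezeAfterDef(Value *V, BasicBlock *Entry) {
  std::optional<BasicBlock::iterator> Pos;
  if (auto *I = dyn_cast<Instruction>(V))
    Pos = I->getInsertionPointAfterDef(); // nullopt for e.g. terminators
  else
    Pos = Entry->getFirstNonPHIOrDbgOrAlloca();
  if (!Pos)
    return nullptr;
  return new FreezeInst(V, V->getName() + ".fr", *Pos);
}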
diff --git a/llvm/lib/Transforms/Scalar/LoopLoadElimination.cpp b/llvm/lib/Transforms/Scalar/LoopLoadElimination.cpp
index 5ec387300aac7f..914cf6e21028ae 100644
--- a/llvm/lib/Transforms/Scalar/LoopLoadElimination.cpp
+++ b/llvm/lib/Transforms/Scalar/LoopLoadElimination.cpp
@@ -440,9 +440,10 @@ class LoadEliminationForLoop {
assert(PH && "Preheader should exist!");
Value *InitialPtr = SEE.expandCodeFor(PtrSCEV->getStart(), Ptr->getType(),
PH->getTerminator());
- Value *Initial = new LoadInst(
- Cand.Load->getType(), InitialPtr, "load_initial",
- /* isVolatile */ false, Cand.Load->getAlign(), PH->getTerminator());
+ Value *Initial =
+ new LoadInst(Cand.Load->getType(), InitialPtr, "load_initial",
+ /* isVolatile */ false, Cand.Load->getAlign(),
+ PH->getTerminator()->getIterator());
PHINode *PHI = PHINode::Create(Initial->getType(), 2, "store_forwarded");
PHI->insertBefore(L->getHeader()->begin());
@@ -458,8 +459,9 @@ class LoadEliminationForLoop {
Value *StoreValue = Cand.Store->getValueOperand();
if (LoadType != StoreType)
- StoreValue = CastInst::CreateBitOrPointerCast(
- StoreValue, LoadType, "store_forward_cast", Cand.Store);
+ StoreValue = CastInst::CreateBitOrPointerCast(StoreValue, LoadType,
+ "store_forward_cast",
+ Cand.Store->getIterator());
PHI->addIncoming(StoreValue, L->getLoopLatch());
diff --git a/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp b/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
index 4f550161148410..4238098181afb9 100644
--- a/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
+++ b/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
@@ -2215,17 +2215,18 @@ void LSRInstance::OptimizeShadowIV() {
// Ignore negative constants, as the code below doesn't handle them
// correctly. TODO: Remove this restriction.
- if (!C->getValue().isStrictlyPositive()) continue;
+ if (!C->getValue().isStrictlyPositive())
+ continue;
/* Add new PHINode. */
- PHINode *NewPH = PHINode::Create(DestTy, 2, "IV.S.", PH);
+ PHINode *NewPH = PHINode::Create(DestTy, 2, "IV.S.", PH->getIterator());
/* create new increment. '++d' in above example. */
Constant *CFP = ConstantFP::get(DestTy, C->getZExtValue());
- BinaryOperator *NewIncr =
- BinaryOperator::Create(Incr->getOpcode() == Instruction::Add ?
- Instruction::FAdd : Instruction::FSub,
- NewPH, CFP, "IV.S.next.", Incr);
+ BinaryOperator *NewIncr = BinaryOperator::Create(
+ Incr->getOpcode() == Instruction::Add ? Instruction::FAdd
+ : Instruction::FSub,
+ NewPH, CFP, "IV.S.next.", Incr->getIterator());
NewPH->addIncoming(NewInit, PH->getIncomingBlock(Entry));
NewPH->addIncoming(NewIncr, PH->getIncomingBlock(Latch));
@@ -2395,8 +2396,8 @@ ICmpInst *LSRInstance::OptimizeMax(ICmpInst *Cond, IVStrideUse* &CondUse) {
// Ok, everything looks ok to change the condition into an SLT or SGE and
// delete the max calculation.
- ICmpInst *NewCond =
- new ICmpInst(Cond, Pred, Cond->getOperand(0), NewRHS, "scmp");
+ ICmpInst *NewCond = new ICmpInst(Cond->getIterator(), Pred,
+ Cond->getOperand(0), NewRHS, "scmp");
// Delete the max calculation instructions.
NewCond->setDebugLoc(Cond->getDebugLoc());
@@ -5532,10 +5533,9 @@ Value *LSRInstance::Expand(const LSRUse &LU, const LSRFixup &LF,
"a scale at the same time!");
if (F.Scale == -1) {
if (ICmpScaledV->getType() != OpTy) {
- Instruction *Cast =
- CastInst::Create(CastInst::getCastOpcode(ICmpScaledV, false,
- OpTy, false),
- ICmpScaledV, OpTy, "tmp", CI);
+ Instruction *Cast = CastInst::Create(
+ CastInst::getCastOpcode(ICmpScaledV, false, OpTy, false),
+ ICmpScaledV, OpTy, "tmp", CI->getIterator());
ICmpScaledV = Cast;
}
CI->setOperand(1, ICmpScaledV);
@@ -5635,11 +5635,10 @@ void LSRInstance::RewriteForPHI(
// If this is reuse-by-noop-cast, insert the noop cast.
Type *OpTy = LF.OperandValToReplace->getType();
if (FullV->getType() != OpTy)
- FullV =
- CastInst::Create(CastInst::getCastOpcode(FullV, false,
- OpTy, false),
- FullV, LF.OperandValToReplace->getType(),
- "tmp", BB->getTerminator());
+ FullV = CastInst::Create(
+ CastInst::getCastOpcode(FullV, false, OpTy, false), FullV,
+ LF.OperandValToReplace->getType(), "tmp",
+ BB->getTerminator()->getIterator());
// If the incoming block for this value is not in the loop, it means the
// current PHI is not in a loop exit, so we must create a LCSSA PHI for
@@ -5711,8 +5710,8 @@ void LSRInstance::Rewrite(const LSRUse &LU, const LSRFixup &LF,
Type *OpTy = LF.OperandValToReplace->getType();
if (FullV->getType() != OpTy) {
Instruction *Cast =
- CastInst::Create(CastInst::getCastOpcode(FullV, false, OpTy, false),
- FullV, OpTy, "tmp", LF.UserInst);
+ CastInst::Create(CastInst::getCastOpcode(FullV, false, OpTy, false),
+ FullV, OpTy, "tmp", LF.UserInst->getIterator());
FullV = Cast;
}
diff --git a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
index 7fe1a222021ebe..308622615332f8 100644
--- a/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
+++ b/llvm/lib/Transforms/Scalar/NaryReassociate.cpp
@@ -511,10 +511,10 @@ Instruction *NaryReassociatePass::tryReassociatedBinaryOp(const SCEV *LHSExpr,
Instruction *NewI = nullptr;
switch (I->getOpcode()) {
case Instruction::Add:
- NewI = BinaryOperator::CreateAdd(LHS, RHS, "", I);
+ NewI = BinaryOperator::CreateAdd(LHS, RHS, "", I->getIterator());
break;
case Instruction::Mul:
- NewI = BinaryOperator::CreateMul(LHS, RHS, "", I);
+ NewI = BinaryOperator::CreateMul(LHS, RHS, "", I->getIterator());
break;
default:
llvm_unreachable("Unexpected instruction.");
diff --git a/llvm/lib/Transforms/Scalar/NewGVN.cpp b/llvm/lib/Transforms/Scalar/NewGVN.cpp
index 19ac9526b5f88b..9caaf720ec9138 100644
--- a/llvm/lib/Transforms/Scalar/NewGVN.cpp
+++ b/llvm/lib/Transforms/Scalar/NewGVN.cpp
@@ -3721,7 +3721,7 @@ void NewGVN::deleteInstructionsInBlock(BasicBlock *BB) {
new StoreInst(
PoisonValue::get(Int8Ty),
Constant::getNullValue(PointerType::getUnqual(BB->getContext())),
- BB->getTerminator());
+ BB->getTerminator()->getIterator());
}
void NewGVN::markInstructionForDeletion(Instruction *I) {
diff --git a/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp b/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
index 0266eb1a9f502c..436a85f62df681 100644
--- a/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
+++ b/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
@@ -190,7 +190,7 @@ static bool enableBackedgeSafepoints(Function &F);
static bool enableCallSafepoints(Function &F);
static void
-InsertSafepointPoll(Instruction *InsertBefore,
+InsertSafepointPoll(BasicBlock::iterator InsertBefore,
std::vector<CallBase *> &ParsePointsNeeded /*rval*/,
const TargetLibraryInfo &TLI);
@@ -368,7 +368,7 @@ bool PlaceSafepointsPass::runImpl(Function &F, const TargetLibraryInfo &TLI) {
// safepoint polls themselves.
for (Instruction *PollLocation : PollsNeeded) {
std::vector<CallBase *> RuntimeCalls;
- InsertSafepointPoll(PollLocation, RuntimeCalls, TLI);
+ InsertSafepointPoll(PollLocation->getIterator(), RuntimeCalls, TLI);
llvm::append_range(ParsePointNeeded, RuntimeCalls);
}
@@ -619,7 +619,7 @@ static bool enableCallSafepoints(Function &F) { return !NoCall; }
// not handle the parsability of state at the runtime call; that's the
// caller's job.
static void
-InsertSafepointPoll(Instruction *InsertBefore,
+InsertSafepointPoll(BasicBlock::iterator InsertBefore,
std::vector<CallBase *> &ParsePointsNeeded /*rval*/,
const TargetLibraryInfo &TLI) {
BasicBlock *OrigBB = InsertBefore->getParent();
diff --git a/llvm/lib/Transforms/Scalar/Reassociate.cpp b/llvm/lib/Transforms/Scalar/Reassociate.cpp
index 61109ed3765987..d91320863e241d 100644
--- a/llvm/lib/Transforms/Scalar/Reassociate.cpp
+++ b/llvm/lib/Transforms/Scalar/Reassociate.cpp
@@ -246,7 +246,8 @@ void ReassociatePass::canonicalizeOperands(Instruction *I) {
}
static BinaryOperator *CreateAdd(Value *S1, Value *S2, const Twine &Name,
- Instruction *InsertBefore, Value *FlagsOp) {
+ BasicBlock::iterator InsertBefore,
+ Value *FlagsOp) {
if (S1->getType()->isIntOrIntVectorTy())
return BinaryOperator::CreateAdd(S1, S2, Name, InsertBefore);
else {
@@ -258,7 +259,8 @@ static BinaryOperator *CreateAdd(Value *S1, Value *S2, const Twine &Name,
}
static BinaryOperator *CreateMul(Value *S1, Value *S2, const Twine &Name,
- Instruction *InsertBefore, Value *FlagsOp) {
+ BasicBlock::iterator InsertBefore,
+ Value *FlagsOp) {
if (S1->getType()->isIntOrIntVectorTy())
return BinaryOperator::CreateMul(S1, S2, Name, InsertBefore);
else {
@@ -291,7 +293,8 @@ static BinaryOperator *LowerNegateToMultiply(Instruction *Neg) {
Constant *NegOne = Ty->isIntOrIntVectorTy() ?
ConstantInt::getAllOnesValue(Ty) : ConstantFP::get(Ty, -1.0);
- BinaryOperator *Res = CreateMul(Neg->getOperand(OpNo), NegOne, "", Neg, Neg);
+ BinaryOperator *Res =
+ CreateMul(Neg->getOperand(OpNo), NegOne, "", Neg->getIterator(), Neg);
Neg->setOperand(OpNo, Constant::getNullValue(Ty)); // Drop use of op.
Res->takeName(Neg);
Neg->replaceAllUsesWith(Res);
@@ -794,8 +797,8 @@ void ReassociatePass::RewriteExprTree(BinaryOperator *I,
BinaryOperator *NewOp;
if (NodesToRewrite.empty()) {
Constant *Undef = UndefValue::get(I->getType());
- NewOp = BinaryOperator::Create(Instruction::BinaryOps(Opcode),
- Undef, Undef, "", I);
+ NewOp = BinaryOperator::Create(Instruction::BinaryOps(Opcode), Undef,
+ Undef, "", I->getIterator());
if (isa<FPMathOperator>(NewOp))
NewOp->setFastMathFlags(I->getFastMathFlags());
} else {
@@ -1046,8 +1049,8 @@ static bool shouldConvertOrWithNoCommonBitsToAdd(Instruction *Or) {
/// transform this into (X+Y) to allow arithmetics reassociation.
static BinaryOperator *convertOrWithNoCommonBitsToAdd(Instruction *Or) {
// Convert an or into an add.
- BinaryOperator *New =
- CreateAdd(Or->getOperand(0), Or->getOperand(1), "", Or, Or);
+ BinaryOperator *New = CreateAdd(Or->getOperand(0), Or->getOperand(1), "",
+ Or->getIterator(), Or);
New->setHasNoSignedWrap();
New->setHasNoUnsignedWrap();
New->takeName(Or);
@@ -1099,7 +1102,8 @@ static BinaryOperator *BreakUpSubtract(Instruction *Sub,
// Calculate the negative value of Operand 1 of the sub instruction,
// and set it as the RHS of the add instruction we just made.
Value *NegVal = NegateValue(Sub->getOperand(1), Sub, ToRedo);
- BinaryOperator *New = CreateAdd(Sub->getOperand(0), NegVal, "", Sub, Sub);
+ BinaryOperator *New =
+ CreateAdd(Sub->getOperand(0), NegVal, "", Sub->getIterator(), Sub);
Sub->setOperand(0, Constant::getNullValue(Sub->getType())); // Drop use of op.
Sub->setOperand(1, Constant::getNullValue(Sub->getType())); // Drop use of op.
New->takeName(Sub);
@@ -1119,8 +1123,8 @@ static BinaryOperator *ConvertShiftToMul(Instruction *Shl) {
auto *SA = cast<ConstantInt>(Shl->getOperand(1));
MulCst = ConstantExpr::getShl(MulCst, SA);
- BinaryOperator *Mul =
- BinaryOperator::CreateMul(Shl->getOperand(0), MulCst, "", Shl);
+ BinaryOperator *Mul = BinaryOperator::CreateMul(Shl->getOperand(0), MulCst,
+ "", Shl->getIterator());
Shl->setOperand(0, PoisonValue::get(Shl->getType())); // Drop use of op.
Mul->takeName(Shl);
@@ -1170,13 +1174,13 @@ static unsigned FindInOperandList(const SmallVectorImpl<ValueEntry> &Ops,
/// Emit a tree of add instructions, summing Ops together
/// and returning the result. Insert the tree before I.
-static Value *EmitAddTreeOfValues(Instruction *I,
+static Value *EmitAddTreeOfValues(BasicBlock::iterator It,
SmallVectorImpl<WeakTrackingVH> &Ops) {
if (Ops.size() == 1) return Ops.back();
Value *V1 = Ops.pop_back_val();
- Value *V2 = EmitAddTreeOfValues(I, Ops);
- return CreateAdd(V2, V1, "reass.add", I, I);
+ Value *V2 = EmitAddTreeOfValues(It, Ops);
+ return CreateAdd(V2, V1, "reass.add", It, &*It);
}
/// If V is an expression tree that is a multiplication sequence,
@@ -1323,7 +1327,7 @@ static Value *OptimizeAndOrXor(unsigned Opcode,
/// instruction. There are two special cases: 1) if the constant operand is 0,
/// it will return NULL. 2) if the constant is ~0, the symbolic operand will
/// be returned.
-static Value *createAndInstr(Instruction *InsertBefore, Value *Opnd,
+static Value *createAndInstr(BasicBlock::iterator InsertBefore, Value *Opnd,
const APInt &ConstOpnd) {
if (ConstOpnd.isZero())
return nullptr;
@@ -1344,7 +1348,7 @@ static Value *createAndInstr(Instruction *InsertBefore, Value *Opnd,
// If it was successful, true is returned, and the "R" and "C" is returned
// via "Res" and "ConstOpnd", respectively; otherwise, false is returned,
// and both "Res" and "ConstOpnd" remain unchanged.
-bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
+bool ReassociatePass::CombineXorOpnd(BasicBlock::iterator It, XorOpnd *Opnd1,
APInt &ConstOpnd, Value *&Res) {
// Xor-Rule 1: (x | c1) ^ c2 = (x | c1) ^ (c1 ^ c1) ^ c2
// = ((x | c1) ^ c1) ^ (c1 ^ c2)
@@ -1361,7 +1365,7 @@ bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
return false;
Value *X = Opnd1->getSymbolicPart();
- Res = createAndInstr(I, X, ~C1);
+ Res = createAndInstr(It, X, ~C1);
// ConstOpnd was C2, now C1 ^ C2.
ConstOpnd ^= C1;
@@ -1378,7 +1382,7 @@ bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
// via "Res" and "ConstOpnd", respectively (If the entire expression is
// evaluated to a constant, the Res is set to NULL); otherwise, false is
// returned, and both "Res" and "ConstOpnd" remain unchanged.
-bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
+bool ReassociatePass::CombineXorOpnd(BasicBlock::iterator It, XorOpnd *Opnd1,
XorOpnd *Opnd2, APInt &ConstOpnd,
Value *&Res) {
Value *X = Opnd1->getSymbolicPart();
@@ -1413,7 +1417,7 @@ bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
return false;
}
- Res = createAndInstr(I, X, C3);
+ Res = createAndInstr(It, X, C3);
ConstOpnd ^= C1;
} else if (Opnd1->isOrExpr()) {
// Xor-Rule 3: (x | c1) ^ (x | c2) = (x & c3) ^ c3 where c3 = c1 ^ c2
@@ -1429,7 +1433,7 @@ bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
return false;
}
- Res = createAndInstr(I, X, C3);
+ Res = createAndInstr(It, X, C3);
ConstOpnd ^= C3;
} else {
// Xor-Rule 4: (x & c1) ^ (x & c2) = (x & (c1^c2))
@@ -1437,7 +1441,7 @@ bool ReassociatePass::CombineXorOpnd(Instruction *I, XorOpnd *Opnd1,
const APInt &C1 = Opnd1->getConstPart();
const APInt &C2 = Opnd2->getConstPart();
APInt C3 = C1 ^ C2;
- Res = createAndInstr(I, X, C3);
+ Res = createAndInstr(It, X, C3);
}
// Put the original operands in the Redo list; hope they will be deleted
@@ -1514,7 +1518,8 @@ Value *ReassociatePass::OptimizeXor(Instruction *I,
Value *CV;
// Step 3.1: Try simplifying "CurrOpnd ^ ConstOpnd"
- if (!ConstOpnd.isZero() && CombineXorOpnd(I, CurrOpnd, ConstOpnd, CV)) {
+ if (!ConstOpnd.isZero() &&
+ CombineXorOpnd(I->getIterator(), CurrOpnd, ConstOpnd, CV)) {
Changed = true;
if (CV)
*CurrOpnd = XorOpnd(CV);
@@ -1531,7 +1536,7 @@ Value *ReassociatePass::OptimizeXor(Instruction *I,
// step 3.2: When previous and current operands share the same symbolic
// value, try to simplify "PrevOpnd ^ CurrOpnd ^ ConstOpnd"
- if (CombineXorOpnd(I, CurrOpnd, PrevOpnd, ConstOpnd, CV)) {
+ if (CombineXorOpnd(I->getIterator(), CurrOpnd, PrevOpnd, ConstOpnd, CV)) {
// Remove previous operand
PrevOpnd->Invalidate();
if (CV) {
@@ -1602,7 +1607,7 @@ Value *ReassociatePass::OptimizeAdd(Instruction *I,
Type *Ty = TheOp->getType();
Constant *C = Ty->isIntOrIntVectorTy() ?
ConstantInt::get(Ty, NumFound) : ConstantFP::get(Ty, NumFound);
- Instruction *Mul = CreateMul(TheOp, C, "factor", I, I);
+ Instruction *Mul = CreateMul(TheOp, C, "factor", I->getIterator(), I);
// Now that we have inserted a multiply, optimize it. This allows us to
// handle cases that require multiple factoring steps, such as this:
@@ -1766,7 +1771,7 @@ Value *ReassociatePass::OptimizeAdd(Instruction *I,
DummyInst->deleteValue();
unsigned NumAddedValues = NewMulOps.size();
- Value *V = EmitAddTreeOfValues(I, NewMulOps);
+ Value *V = EmitAddTreeOfValues(I->getIterator(), NewMulOps);
// Now that we have inserted the add tree, optimize it. This allows us to
// handle cases that require multiple factoring steps, such as this:
@@ -1777,7 +1782,7 @@ Value *ReassociatePass::OptimizeAdd(Instruction *I,
RedoInsts.insert(VI);
// Create the multiply.
- Instruction *V2 = CreateMul(V, MaxOccVal, "reass.mul", I, I);
+ Instruction *V2 = CreateMul(V, MaxOccVal, "reass.mul", I->getIterator(), I);
// Rerun associate on the multiply in case the inner expression turned into
// a multiply. We want to make sure that we keep things in canonical form.
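Reassociate shows the threading variant: the insertion position travels
through the CreateAdd/CreateMul helpers as an iterator, and where a helper
still needs the instruction itself as a Value (the FlagsOp argument), the
hunks spell it &*It. A reduced sketch of that recovery (helper name
hypothetical):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static BinaryOperator *createAddAt(Value *S1, Value *S2,
                                       BasicBlock::iterator It) {
      // &*It dereferences the iterator back to an Instruction * (and hence
      // a Value *) for APIs that still want a pointer, while the iterator
      // itself positions the insertion.
      Value *FlagsOp = &*It;
      (void)FlagsOp; // e.g. consulted for fast-math flags, as above
      return BinaryOperator::CreateAdd(S1, S2, "reass.add", It);
    }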
diff --git a/llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp b/llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp
index 45ce3bf3ceae23..330b464667ee46 100644
--- a/llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp
+++ b/llvm/lib/Transforms/Scalar/RewriteStatepointsForGC.cpp
@@ -1143,7 +1143,8 @@ static Value *findBasePointer(Value *I, DefiningValueMapTy &Cache,
assert(Base && "Can't be null");
// The cast is needed since base traversal may strip away bitcasts
if (Base->getType() != Input->getType() && InsertPt)
- Base = new BitCastInst(Base, Input->getType(), "cast", InsertPt);
+ Base = new BitCastInst(Base, Input->getType(), "cast",
+ InsertPt->getIterator());
return Base;
};
@@ -1612,7 +1613,7 @@ class DeferredReplacement {
// Note: we've inserted instructions, so the call to llvm.deoptimize may
// not necessarily be followed by the matching return.
auto *RI = cast<ReturnInst>(OldI->getParent()->getTerminator());
- new UnreachableInst(RI->getContext(), RI);
+ new UnreachableInst(RI->getContext(), RI->getIterator());
RI->eraseFromParent();
}
@@ -1976,7 +1977,7 @@ insertRelocationStores(iterator_range<Value::user_iterator> GCRelocs,
// Emit store into the related alloca.
assert(Relocate->getNextNode() &&
"Should always have one since it's not a terminator");
- new StoreInst(Relocate, Alloca, Relocate->getNextNode());
+ new StoreInst(Relocate, Alloca, std::next(Relocate->getIterator()));
#ifndef NDEBUG
VisitedLiveValues.insert(OriginalValue);
@@ -1999,7 +2000,7 @@ static void insertRematerializationStores(
Value *Alloca = AllocaMap[OriginalValue];
new StoreInst(RematerializedValue, Alloca,
- RematerializedValue->getNextNode());
+ std::next(RematerializedValue->getIterator()));
#ifndef NDEBUG
VisitedLiveValues.insert(OriginalValue);
@@ -2031,9 +2032,9 @@ static void relocationViaAlloca(
// "PromotableAllocas"
const DataLayout &DL = F.getParent()->getDataLayout();
auto emitAllocaFor = [&](Value *LiveValue) {
- AllocaInst *Alloca = new AllocaInst(LiveValue->getType(),
- DL.getAllocaAddrSpace(), "",
- F.getEntryBlock().getFirstNonPHI());
+ AllocaInst *Alloca =
+ new AllocaInst(LiveValue->getType(), DL.getAllocaAddrSpace(), "",
+ F.getEntryBlock().getFirstNonPHIIt());
AllocaMap[LiveValue] = Alloca;
PromotableAllocas.push_back(Alloca);
};
@@ -2100,7 +2101,7 @@ static void relocationViaAlloca(
ToClobber.push_back(Alloca);
}
- auto InsertClobbersAt = [&](Instruction *IP) {
+ auto InsertClobbersAt = [&](BasicBlock::iterator IP) {
for (auto *AI : ToClobber) {
auto AT = AI->getAllocatedType();
Constant *CPN;
@@ -2115,10 +2116,11 @@ static void relocationViaAlloca(
// Insert the clobbering stores. These may get intermixed with the
// gc.results and gc.relocates, but that's fine.
if (auto II = dyn_cast<InvokeInst>(Statepoint)) {
- InsertClobbersAt(&*II->getNormalDest()->getFirstInsertionPt());
- InsertClobbersAt(&*II->getUnwindDest()->getFirstInsertionPt());
+ InsertClobbersAt(II->getNormalDest()->getFirstInsertionPt());
+ InsertClobbersAt(II->getUnwindDest()->getFirstInsertionPt());
} else {
- InsertClobbersAt(cast<Instruction>(Statepoint)->getNextNode());
+ InsertClobbersAt(
+ std::next(cast<Instruction>(Statepoint)->getIterator()));
}
}
}
@@ -2154,15 +2156,15 @@ static void relocationViaAlloca(
PHINode *Phi = cast<PHINode>(Use);
for (unsigned i = 0; i < Phi->getNumIncomingValues(); i++) {
if (Def == Phi->getIncomingValue(i)) {
- LoadInst *Load =
- new LoadInst(Alloca->getAllocatedType(), Alloca, "",
- Phi->getIncomingBlock(i)->getTerminator());
+ LoadInst *Load = new LoadInst(
+ Alloca->getAllocatedType(), Alloca, "",
+ Phi->getIncomingBlock(i)->getTerminator()->getIterator());
Phi->setIncomingValue(i, Load);
}
}
} else {
- LoadInst *Load =
- new LoadInst(Alloca->getAllocatedType(), Alloca, "", Use);
+ LoadInst *Load = new LoadInst(Alloca->getAllocatedType(), Alloca, "",
+ Use->getIterator());
Use->replaceUsesOfWith(Def, Load);
}
}
@@ -2229,16 +2231,16 @@ static void insertUseHolderAfter(CallBase *Call, const ArrayRef<Value *> Values,
if (isa<CallInst>(Call)) {
// For call safepoints insert dummy calls right after safepoint
Holders.push_back(
- CallInst::Create(Func, Values, "", &*++Call->getIterator()));
+ CallInst::Create(Func, Values, "", std::next(Call->getIterator())));
return;
}
// For invoke safepooints insert dummy calls both in normal and
// exceptional destination blocks
auto *II = cast<InvokeInst>(Call);
Holders.push_back(CallInst::Create(
- Func, Values, "", &*II->getNormalDest()->getFirstInsertionPt()));
+ Func, Values, "", II->getNormalDest()->getFirstInsertionPt()));
Holders.push_back(CallInst::Create(
- Func, Values, "", &*II->getUnwindDest()->getFirstInsertionPt()));
+ Func, Values, "", II->getUnwindDest()->getFirstInsertionPt()));
}
static void findLiveReferences(
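Two recurring sub-patterns appear in this file. First, positions that used to
be computed with getNextNode() become std::next(getIterator()), keeping the
value an iterator rather than round-tripping through a pointer. Second,
block-start positions from getFirstInsertionPt() and getFirstNonPHIIt() are
now passed straight through; the old &* prefix would have collapsed them to an
Instruction * and dropped the head bit. Minimal sketches (helper names
hypothetical):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instructions.h"
    #include <iterator>
    using namespace llvm;

    // Replaces I->getNextNode() as an insertion position: same place,
    // but expressed as an iterator.
    static BasicBlock::iterator afterInst(Instruction *I) {
      return std::next(I->getIterator());
    }

    // Pass getFirstInsertionPt() through untouched; &*... would strip
    // the debug-info bit the iterator carries.
    static void storeAtBlockStart(Value *V, Value *Ptr, BasicBlock *BB) {
      new StoreInst(V, Ptr, BB->getFirstInsertionPt());
    }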
diff --git a/llvm/lib/Transforms/Scalar/SROA.cpp b/llvm/lib/Transforms/Scalar/SROA.cpp
index b2de2209b3f199..cd81fb702f4d4e 100644
--- a/llvm/lib/Transforms/Scalar/SROA.cpp
+++ b/llvm/lib/Transforms/Scalar/SROA.cpp
@@ -1853,7 +1853,7 @@ static void rewriteMemOpOfSelect(SelectInst &SI, T &I,
Tail->setName(Head->getName() + ".cont");
PHINode *PN;
if (isa<LoadInst>(I))
- PN = PHINode::Create(I.getType(), 2, "", &I);
+ PN = PHINode::Create(I.getType(), 2, "", I.getIterator());
for (BasicBlock *SuccBB : successors(Head)) {
bool IsThen = SuccBB == HeadBI->getSuccessor(0);
int SuccIdx = IsThen ? 0 : 1;
@@ -4883,7 +4883,8 @@ AllocaInst *SROA::rewritePartition(AllocaInst &AI, AllocaSlices &AS,
NewAI = new AllocaInst(
SliceTy, AI.getAddressSpace(), nullptr,
IsUnconstrained ? DL.getPrefTypeAlign(SliceTy) : Alignment,
- AI.getName() + ".sroa." + Twine(P.begin() - AS.begin()), &AI);
+ AI.getName() + ".sroa." + Twine(P.begin() - AS.begin()),
+ AI.getIterator());
// Copy the old AI debug location over to the new one.
NewAI->setDebugLoc(AI.getDebugLoc());
++NumNewAllocas;
diff --git a/llvm/lib/Transforms/Scalar/SeparateConstOffsetFromGEP.cpp b/llvm/lib/Transforms/Scalar/SeparateConstOffsetFromGEP.cpp
index 5124909696aadb..c54a956fc7e243 100644
--- a/llvm/lib/Transforms/Scalar/SeparateConstOffsetFromGEP.cpp
+++ b/llvm/lib/Transforms/Scalar/SeparateConstOffsetFromGEP.cpp
@@ -244,7 +244,7 @@ class ConstantOffsetExtractor {
static int64_t Find(Value *Idx, GetElementPtrInst *GEP);
private:
- ConstantOffsetExtractor(Instruction *InsertionPt)
+ ConstantOffsetExtractor(BasicBlock::iterator InsertionPt)
: IP(InsertionPt), DL(InsertionPt->getModule()->getDataLayout()) {}
/// Searches the expression that computes V for a non-zero constant C s.t.
@@ -332,7 +332,7 @@ class ConstantOffsetExtractor {
SmallVector<CastInst *, 16> ExtInsts;
/// Insertion position of cloned instructions.
- Instruction *IP;
+ BasicBlock::iterator IP;
const DataLayout &DL;
};
@@ -670,7 +670,7 @@ Value *ConstantOffsetExtractor::applyExts(Value *V) {
Instruction *Ext = I->clone();
Ext->setOperand(0, Current);
- Ext->insertBefore(IP);
+ Ext->insertBefore(*IP->getParent(), IP);
Current = Ext;
}
return Current;
@@ -780,7 +780,7 @@ Value *ConstantOffsetExtractor::removeConstOffset(unsigned ChainIndex) {
Value *ConstantOffsetExtractor::Extract(Value *Idx, GetElementPtrInst *GEP,
User *&UserChainTail) {
- ConstantOffsetExtractor Extractor(GEP);
+ ConstantOffsetExtractor Extractor(GEP->getIterator());
// Find a non-zero constant offset first.
APInt ConstantOffset =
Extractor.find(Idx, /* SignExtended */ false, /* ZeroExtended */ false,
@@ -797,7 +797,7 @@ Value *ConstantOffsetExtractor::Extract(Value *Idx, GetElementPtrInst *GEP,
int64_t ConstantOffsetExtractor::Find(Value *Idx, GetElementPtrInst *GEP) {
// If Idx is an index of an inbound GEP, Idx is guaranteed to be non-negative.
- return ConstantOffsetExtractor(GEP)
+ return ConstantOffsetExtractor(GEP->getIterator())
.find(Idx, /* SignExtended */ false, /* ZeroExtended */ false,
GEP->isInBounds())
.getSExtValue();
@@ -813,7 +813,8 @@ bool SeparateConstOffsetFromGEP::canonicalizeArrayIndicesToIndexSize(
// Skip struct member indices which must be i32.
if (GTI.isSequential()) {
if ((*I)->getType() != PtrIdxTy) {
- *I = CastInst::CreateIntegerCast(*I, PtrIdxTy, true, "idxprom", GEP);
+ *I = CastInst::CreateIntegerCast(*I, PtrIdxTy, true, "idxprom",
+ GEP->getIterator());
Changed = true;
}
}
@@ -1249,7 +1250,8 @@ bool SeparateConstOffsetFromGEP::reuniteExts(Instruction *I) {
if (LHS->getType() == RHS->getType()) {
ExprKey Key = createNormalizedCommutablePair(LHS, RHS);
if (auto *Dom = findClosestMatchingDominator(Key, I, DominatingAdds)) {
- Instruction *NewSExt = new SExtInst(Dom, I->getType(), "", I);
+ Instruction *NewSExt =
+ new SExtInst(Dom, I->getType(), "", I->getIterator());
NewSExt->takeName(I);
I->replaceAllUsesWith(NewSExt);
RecursivelyDeleteTriviallyDeadInstructions(I);
@@ -1260,7 +1262,8 @@ bool SeparateConstOffsetFromGEP::reuniteExts(Instruction *I) {
if (LHS->getType() == RHS->getType()) {
if (auto *Dom =
findClosestMatchingDominator({LHS, RHS}, I, DominatingSubs)) {
- Instruction *NewSExt = new SExtInst(Dom, I->getType(), "", I);
+ Instruction *NewSExt =
+ new SExtInst(Dom, I->getType(), "", I->getIterator());
NewSExt->takeName(I);
I->replaceAllUsesWith(NewSExt);
RecursivelyDeleteTriviallyDeadInstructions(I);
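This is the change the log flags as more than a spelling update:
ConstantOffsetExtractor's insertion-position field IP is now a
BasicBlock::iterator, and cloned instructions are placed with the two-argument
insertBefore that takes the parent block alongside the iterator. Reduced to
its shape (struct and method names hypothetical):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    struct ClonerAtPoint {
      BasicBlock::iterator IP; // was: Instruction *IP;

      Instruction *cloneHere(const Instruction *I) {
        Instruction *C = I->clone();
        // The (BasicBlock &, iterator) overload keeps the position as an
        // iterator all the way to the insertion itself.
        C->insertBefore(*IP->getParent(), IP);
        return C;
      }
    };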
diff --git a/llvm/lib/Transforms/Scalar/SimpleLoopUnswitch.cpp b/llvm/lib/Transforms/Scalar/SimpleLoopUnswitch.cpp
index 7eb0ba1c2c1793..3d146086d31af8 100644
--- a/llvm/lib/Transforms/Scalar/SimpleLoopUnswitch.cpp
+++ b/llvm/lib/Transforms/Scalar/SimpleLoopUnswitch.cpp
@@ -2347,8 +2347,7 @@ static void unswitchNontrivialInvariants(
BI->setSuccessor(1 - ClonedSucc, LoopPH);
Value *Cond = skipTrivialSelect(BI->getCondition());
if (InsertFreeze)
- Cond = new FreezeInst(
- Cond, Cond->getName() + ".fr", BI);
+ Cond = new FreezeInst(Cond, Cond->getName() + ".fr", BI->getIterator());
BI->setCondition(Cond);
DTUpdates.push_back({DominatorTree::Insert, SplitBB, ClonedPH});
} else {
@@ -2365,8 +2364,9 @@ static void unswitchNontrivialInvariants(
Case.setSuccessor(ClonedPHs.find(Case.getCaseSuccessor())->second);
if (InsertFreeze)
- SI->setCondition(new FreezeInst(
- SI->getCondition(), SI->getCondition()->getName() + ".fr", SI));
+ SI->setCondition(new FreezeInst(SI->getCondition(),
+ SI->getCondition()->getName() + ".fr",
+ SI->getIterator()));
// We need to use the set to populate domtree updates as even when there
// are multiple cases pointing at the same successor we only want to
@@ -2704,7 +2704,8 @@ static BranchInst *turnSelectIntoBranch(SelectInst *SI, DominatorTree &DT,
if (MSSAU)
MSSAU->moveAllAfterSpliceBlocks(HeadBB, TailBB, SI);
- PHINode *Phi = PHINode::Create(SI->getType(), 2, "unswitched.select", SI);
+ PHINode *Phi =
+ PHINode::Create(SI->getType(), 2, "unswitched.select", SI->getIterator());
Phi->addIncoming(SI->getTrueValue(), ThenBB);
Phi->addIncoming(SI->getFalseValue(), HeadBB);
SI->replaceAllUsesWith(Phi);
@@ -3092,7 +3093,7 @@ injectPendingInvariantConditions(NonTrivialUnswitchCandidate Candidate, Loop &L,
// unswitching will break. Better optimize it away later.
auto *InjectedCond =
ICmpInst::Create(Instruction::ICmp, Pred, LHS, RHS, "injected.cond",
- Preheader->getTerminator());
+ Preheader->getTerminator()->getIterator());
BasicBlock *CheckBlock = BasicBlock::Create(Ctx, BB->getName() + ".check",
BB->getParent(), InLoopSucc);
diff --git a/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp b/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
index c6e8505d5ab4b4..519ff3221a3bc3 100644
--- a/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
+++ b/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
@@ -601,7 +601,7 @@ void TailRecursionEliminator::copyByValueOperandIntoLocalTemp(CallInst *CI,
// Put alloca into the entry block.
Value *NewAlloca = new AllocaInst(
AggTy, DL.getAllocaAddrSpace(), nullptr, Alignment,
- CI->getArgOperand(OpndIdx)->getName(), &*F.getEntryBlock().begin());
+ CI->getArgOperand(OpndIdx)->getName(), F.getEntryBlock().begin());
IRBuilder<> Builder(CI);
Value *Size = Builder.getInt64(DL.getTypeAllocSize(AggTy));
@@ -714,8 +714,9 @@ bool TailRecursionEliminator::eliminateCall(CallInst *CI) {
// We found a return value we want to use, insert a select instruction to
// select it if we don't already know what our return value will be and
// store the result in our return value PHI node.
- SelectInst *SI = SelectInst::Create(
- RetKnownPN, RetPN, Ret->getReturnValue(), "current.ret.tr", Ret);
+ SelectInst *SI =
+ SelectInst::Create(RetKnownPN, RetPN, Ret->getReturnValue(),
+ "current.ret.tr", Ret->getIterator());
RetSelects.push_back(SI);
RetPN->addIncoming(SI, BB);
@@ -728,7 +729,7 @@ bool TailRecursionEliminator::eliminateCall(CallInst *CI) {
// Now that all of the PHI nodes are in place, remove the call and
// ret instructions, replacing them with an unconditional branch.
- BranchInst *NewBI = BranchInst::Create(HeaderBB, Ret);
+ BranchInst *NewBI = BranchInst::Create(HeaderBB, Ret->getIterator());
NewBI->setDebugLoc(CI->getDebugLoc());
Ret->eraseFromParent(); // Remove return.
@@ -787,8 +788,9 @@ void TailRecursionEliminator::cleanupAndFinalize() {
if (!RI)
continue;
- SelectInst *SI = SelectInst::Create(
- RetKnownPN, RetPN, RI->getOperand(0), "current.ret.tr", RI);
+ SelectInst *SI =
+ SelectInst::Create(RetKnownPN, RetPN, RI->getOperand(0),
+ "current.ret.tr", RI->getIterator());
RetSelects.push_back(SI);
RI->setOperand(0, SI);
}
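The first TailRecursionElimination hunk is worth a second look: the old code
wrote &*F.getEntryBlock().begin() and so flattened the begin() iterator to a
bare pointer; dropping the &* hands the iterator itself to the AllocaInst
constructor, which is exactly the start-of-block position the RemoveDIs rule
says must not be collapsed. As a sketch (helper name and "tmp" label
hypothetical):

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Module.h"
    using namespace llvm;

    // Materialise an alloca at the very start of F's entry block.
    static AllocaInst *entryAlloca(Function &F, Type *Ty, Align A) {
      const DataLayout &DL = F.getParent()->getDataLayout();
      return new AllocaInst(Ty, DL.getAllocaAddrSpace(),
                            /*ArraySize=*/nullptr, A, "tmp",
                            F.getEntryBlock().begin());
    }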
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index 50a073e890626e..edaad4d033bdf0 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -3007,8 +3007,9 @@ PHINode *InnerLoopVectorizer::createInductionResumeValue(
}
// Create phi nodes to merge from the backedge-taken check block.
- PHINode *BCResumeVal = PHINode::Create(OrigPhi->getType(), 3, "bc.resume.val",
- LoopScalarPreHeader->getTerminator());
+ PHINode *BCResumeVal =
+ PHINode::Create(OrigPhi->getType(), 3, "bc.resume.val",
+ LoopScalarPreHeader->getTerminator()->getIterator());
// Copy original phi DL over to the new one.
BCResumeVal->setDebugLoc(OrigPhi->getDebugLoc());
@@ -7415,8 +7416,9 @@ static void createAndCollectMergePhiForReduction(
BasicBlock *LoopScalarPreHeader = OrigLoop->getLoopPreheader();
// Create a phi node that merges control-flow from the backedge-taken check
// block and the middle block.
- auto *BCBlockPhi = PHINode::Create(FinalValue->getType(), 2, "bc.merge.rdx",
- LoopScalarPreHeader->getTerminator());
+ auto *BCBlockPhi =
+ PHINode::Create(FinalValue->getType(), 2, "bc.merge.rdx",
+ LoopScalarPreHeader->getTerminator()->getIterator());
// If we are fixing reductions in the epilogue loop then we should already
// have created a bc.merge.rdx Phi after the main vector body. Ensure that
@@ -9176,14 +9178,14 @@ void VPWidenPointerInductionRecipe::execute(VPTransformState &State) {
// Build a pointer phi
Value *ScalarStartValue = getStartValue()->getLiveInIRValue();
Type *ScStValueType = ScalarStartValue->getType();
- PHINode *NewPointerPhi =
- PHINode::Create(ScStValueType, 2, "pointer.phi", CanonicalIV);
+ PHINode *NewPointerPhi = PHINode::Create(ScStValueType, 2, "pointer.phi",
+ CanonicalIV->getIterator());
BasicBlock *VectorPH = State.CFG.getPreheaderBBFor(this);
NewPointerPhi->addIncoming(ScalarStartValue, VectorPH);
// A pointer induction, performed by using a gep
- Instruction *InductionLoc = &*State.Builder.GetInsertPoint();
+ BasicBlock::iterator InductionLoc = State.Builder.GetInsertPoint();
Value *ScalarStepValue = State.get(getOperand(1), VPIteration(0, 0));
Value *RuntimeVF = getRuntimeVF(State.Builder, PhiType, State.VF);
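The LoopVectorize hunk applies the same rule to IRBuilder: GetInsertPoint()
already returns a BasicBlock::iterator, so InductionLoc simply keeps that type
instead of dereferencing to an Instruction *. A one-line sketch (function name
hypothetical):

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static BasicBlock::iterator builderPos(IRBuilder<> &B) {
      // Was: Instruction *Loc = &*B.GetInsertPoint();
      return B.GetInsertPoint();
    }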