[llvm] [VPlan] Model address separately. (PR #72164)
via llvm-commits
llvm-commits at lists.llvm.org
Sun Dec 17 13:44:04 PST 2023
================
@@ -397,6 +399,50 @@ Value *VPInstruction::generateInstruction(VPTransformState &State,
Builder.GetInsertBlock()->getTerminator()->eraseFromParent();
return CondBr;
}
+ case VPInstruction::VectorPtr:
+ case VPInstruction::VectorPtrReverse: {
+ // Calculate the pointer for the specific unroll-part.
+ Value *PartPtr = nullptr;
+ bool IsReverse = getOpcode() == VPInstruction::VectorPtrReverse;
+ auto *MemR = cast<VPWidenMemoryInstructionRecipe>(*user_begin());
+ Type *ScalarDataTy =
+ MemR->isStore() ? cast<StoreInst>(&MemR->getIngredient())
+ ->getValueOperand()
+ ->getType()
+ : cast<LoadInst>(&MemR->getIngredient())->getType();
+ // Use i32 for the gep index type when the value is constant,
+ // or query DataLayout for a more suitable index type otherwise.
+ const DataLayout &DL =
+ Builder.GetInsertBlock()->getModule()->getDataLayout();
+ Type *IndexTy = State.VF.isScalable() && (IsReverse || Part > 0)
+ ? DL.getIndexType(ScalarDataTy->getPointerTo())
+ : Builder.getInt32Ty();
+ Value *Ptr = State.get(getOperand(0), VPIteration(0, 0));
+ bool InBounds = false;
+ if (auto *GEP = dyn_cast<GetElementPtrInst>(Ptr->stripPointerCasts()))
+ InBounds = GEP->isInBounds();
+ if (IsReverse) {
+ // If the address is consecutive but reversed, then the
+ // wide store needs to start at the last vector element.
+ // RunTimeVF = VScale * VF.getKnownMinValue()
+ // For fixed-width VScale is 1, then RunTimeVF = VF.getKnownMinValue()
+ Value *RunTimeVF = getRuntimeVF(Builder, IndexTy, State.VF);
----------------
ayalz wrote:
A follow-up should probably model VF as a VPValue, following the modelling of VFxUF, coupled with unrolling by UF as a VPlan-to-VPlan transformation.
https://github.com/llvm/llvm-project/pull/72164
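[Editor's note: the quoted hunk is truncated at the commented line. For context, below is a minimal standalone C++ sketch of the offset arithmetic the VectorPtrReverse case performs. The concrete offsets (-Part * RunTimeVF for the unroll part, then 1 - RunTimeVF to reach the last lane) are assumptions based on how reverse consecutive accesses are usually addressed, not a quote of the patch; reversePartOffset is a hypothetical helper, not an LLVM API.]

  #include <cstdint>
  #include <cstdio>

  // Element offset of the part pointer relative to the scalar pointer,
  // mirroring the two GEPs a reverse consecutive access emits: first step
  // back over the earlier unroll parts, then back to the last lane so the
  // wide reverse access starts at the lowest address it touches.
  int64_t reversePartOffset(int64_t Part, int64_t RunTimeVF) {
    int64_t NumElt = -Part * RunTimeVF; // skip earlier unroll parts
    int64_t LastLane = 1 - RunTimeVF;   // step back to the last lane
    return NumElt + LastLane;
  }

  int main() {
    // With RunTimeVF = 4, parts 0 and 1 start at element offsets -3 and -7.
    for (int64_t Part = 0; Part < 2; ++Part)
      printf("Part %lld -> offset %lld\n", (long long)Part,
             (long long)reversePartOffset(Part, 4));
    return 0;
  }

Running it with RunTimeVF = 4 prints offsets -3 and -7, i.e. each part's wide reverse access covers the RunTimeVF elements from its offset upward and is then reversed into lane order.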