[PATCH] D70865: [LV] VPValues for memory operation pointers (NFCI)
Ayal Zaks via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Wed Jan 8 23:57:27 PST 2020
Ayal accepted this revision.
Ayal added a comment.
This revision is now accepted and ready to land.
This looks good to me; ship it. I've added some final minor, optional nits below.
`BlockInMask` helps distinguish between the mask of a conditional block and the mask of an interleave group with gaps.
A `VPValue` variable covers all unroll parts, whereas an IR `Value` variable generated for it is per unroll part. So the naming convention can use `Addr`, `BlockInMask` for `VPValue`'s and `AddrPart`, `BlockInMaskPart` for their corresponding IR `Value`'s.
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2369
if (Decision == LoopVectorizationCostModel::CM_Interleave)
- return vectorizeInterleaveGroup(Instr);
+ return vectorizeInterleaveGroup(Instr, State, Addr, BlockInMask);
----------------
gilr wrote:
> Ayal wrote:
> > Replace with assert? This method should no longer be called to vectorize interleave groups. Unrelated, can be committed separately.
> > This method should no longer be called to vectorize interleave groups
>
> Right.
>
> > Unrelated, can be committed separately.
>
> Agreed.
Can leave behind a FIXME.
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:485
- /// Try to vectorize the interleaved access group that \p Instr belongs to,
- /// optionally masking the vector operations if \p BlockInMask is non-null.
- void vectorizeInterleaveGroup(Instruction *Instr,
- VectorParts *BlockInMask = nullptr);
-
- /// Vectorize Load and Store instructions, optionally masking the vector
- /// operations if \p BlockInMask is non-null.
- void vectorizeMemoryInstruction(Instruction *Instr,
- VectorParts *BlockInMask = nullptr);
+ /// Try to vectorize the interleaved access group that \p Instr belongs to
+ /// with the base address given in \p Addr, optionally masking the vector
----------------
("Try to vectorize" - indeed this actually does "try to", as it ends up not vectorizing the interleave group when Instr is not the insert position.)
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2189
// Prepare for the new pointers.
- setDebugLocFromInst(Builder, Ptr);
- SmallVector<Value *, 2> NewPtrs;
+ SmallVector<Value *, 2> AddrParts;
unsigned Index = Group->getIndex(Instr);
----------------
It may be better to place the definition of `AddrParts` together with the comment, next to the loop below that prepares for the new pointers/addresses (Ptr >> Addr).
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2229
+ unsigned AddressSpace = AddrPart->getType()->getPointerAddressSpace();
+ Type *PtrTy = VecTy->getPointerTo(AddressSpace);
+ AddrParts.push_back(Builder.CreateBitCast(AddrPart, PtrTy));
----------------
PtrTy >> AddrTy?
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2394
bool isMaskRequired = BlockInMask;
if (isMaskRequired)
+ for (unsigned Part = 0; Part < UF; ++Part)
----------------
Fold into `if (BlockInMask)`, as done above? Independent change; can be done separately.
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2400
// Calculate the pointer for the specific unroll-part.
GetElementPtrInst *PartPtr = nullptr;
----------------
To match the naming above, should `Ptr` and `PartPtr` be renamed to `Addr`/`AddrPart`? NFC, possibly done separately(?)
================
Comment at: llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:2433
Instruction *NewSI = nullptr;
Value *StoredVal = getOrCreateVectorValue(SI->getValueOperand(), Part);
if (CreateGatherScatter) {
----------------
(In a future step this too should turn into `State.get(Val, Part);`)
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D70865/new/
https://reviews.llvm.org/D70865