[PATCH] Using Masked Load / Store intrinsics in Loop Vectorizer
Nadav Rotem
nrotem at apple.com
Thu Dec 4 08:39:12 PST 2014
Hi Elena,
Thank you for working on this.
+  bool canPredicateStore(Type *DataType, Value *Ptr) {
+    return TTI->isLegalPredicatedStore(DataType, isConsecutivePtr(Ptr));
+  }
+  bool canPredicateLoad(Type *DataType, Value *Ptr) {
+    return TTI->isLegalPredicatedLoad(DataType, isConsecutivePtr(Ptr));
+  }
+  bool setMaskedOp(const Instruction* I) {
+    return (MaskedOp.find(I) != MaskedOp.end());
+  }
private:
Can you please document these functions? The name setMaskedOp is confusing, since the function only checks whether I is in MaskedOp and doesn't set anything, and Doxygen-style comments would be useful here.
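For example, something along these lines (just a sketch of what I have in mind; the comment wording and the isMaskedOp name are only suggestions):

  /// Returns true if the target supports a masked (predicated) store of
  /// \p DataType through \p Ptr, taking into account whether the pointer
  /// is consecutive.
  bool canPredicateStore(Type *DataType, Value *Ptr) {
    return TTI->isLegalPredicatedStore(DataType, isConsecutivePtr(Ptr));
  }

  /// Returns true if the target supports a masked (predicated) load of
  /// \p DataType through \p Ptr, taking into account whether the pointer
  /// is consecutive.
  bool canPredicateLoad(Type *DataType, Value *Ptr) {
    return TTI->isLegalPredicatedLoad(DataType, isConsecutivePtr(Ptr));
  }

  /// Returns true if \p I was recorded as an instruction that has to be
  /// widened into a masked memory operation. Since this only queries the
  /// set, a name like isMaskedOp reads better than setMaskedOp.
  bool isMaskedOp(const Instruction *I) {
    return MaskedOp.find(I) != MaskedOp.end();
  }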
Thanks,
Nadav
> On Dec 4, 2014, at 6:46 AM, Elena Demikhovsky <elena.demikhovsky at intel.com> wrote:
>
> Hi nadav, aschwaighofer,
>
> The loop vectorizer optimizes loops containing conditional memory accesses by generating masked load and store intrinsics.
> This decision is target-dependent.
>
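For other readers following along: the kind of loop this enables is one with a guarded memory access, roughly like the following (my own illustration, not code from the patch or its tests; the function and names are made up):

  // A loop with a conditional store. Without masked store support the
  // vectorizer cannot safely if-convert the guarded store; on targets that
  // report the masked intrinsics as legal (e.g. AVX-512-style masking) it
  // can now emit llvm.masked.store and vectorize the loop.
  void foo(int *A, const int *B, const int *Trigger, int N) {
    for (int i = 0; i < N; ++i) {
      if (Trigger[i] > 0)         // the store executes only on some lanes,
        A[i] = B[i] + Trigger[i]; // so the vector form needs a masked store
    }
  }
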
> I already submitted the codegen changes for the intrinsics.
>
> http://reviews.llvm.org/D6527
>
> Files:
> lib/Transforms/Vectorize/LoopVectorize.cpp
> test/Transforms/LoopVectorize/X86/mask1.ll
> test/Transforms/LoopVectorize/X86/mask2.ll
> test/Transforms/LoopVectorize/X86/mask3.ll
> test/Transforms/LoopVectorize/X86/mask4.ll
> <D6527.16924.patch>