[PATCH] D50480: [LV] Vectorizing loops of arbitrary trip count without remainder under opt for size

Ayal Zaks via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Aug 20 15:08:47 PDT 2018


Ayal marked 4 inline comments as done.
Ayal added inline comments.


================
Comment at: include/llvm/Transforms/Vectorize/LoopVectorizationLegality.h:485
   SmallPtrSet<const Instruction *, 8> MaskedOp;
+
+  /// All blocks of loop are to be masked to fold tail of scalar iterations.
----------------
hsaito wrote:
> I think it's best not to keep this state in the Legal. From the Legal perspective, being able to vectorize the whole loop body under the mask and the actual decision to do so are completely separate issues. 
> 
> Since canFold...() is invoked by CostModel::computeMaxVF, we should be able to keep this state in the CostModel. After all, whether to bail out or continue under FoldTailByMasking is a cost model side of the state, after consulting the Legal.
OK.
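
(For concreteness, a rough sketch of keeping that state in the CostModel; the member and method names here are assumptions based on this thread, not the actual patch code.)

```
bool LoopVectorizationCostModel::shouldFoldTailByMasking(bool OptForSize) {
  // Legal only answers whether every block of the loop can be masked; the
  // actual decision to fold the tail, and the recorded state, stay in the
  // cost model, e.g. when consulted from computeMaxVF().
  if (OptForSize && Legal->canFoldTailByMasking()) {
    FoldTailByMasking = true;
    return true;
  }
  return false;
}
```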


================
Comment at: lib/Transforms/Vectorize/LoopVectorizationLegality.cpp:792
 
 bool LoopVectorizationLegality::blockNeedsPredication(BasicBlock *BB) {
+  return (FoldTailByMasking ||
----------------
hsaito wrote:
> By moving FoldTail state to CostModel,  we can define CostModel::blockNeedsPredication(BB) as FoldTailByMasking || LAI::blockNeedsPredication(BB) and make Legal version static to Legal.
> 
OK, except that LAI::blockNeedsPredication() also requires the DT, which the CostModel does not have. Let's have CostModel::blockNeedsPredication(BB) return FoldTailByMasking || Legal::blockNeedsPredication(BB). Hopefully the two will not cause confusion.

Making the Legal version static should be pursued in a separate patch, if desired.
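
(A minimal sketch of that wrapper, assuming the FoldTailByMasking flag lives in the cost model as discussed above; not necessarily the exact patch code.)

```
bool LoopVectorizationCostModel::blockNeedsPredication(BasicBlock *BB) {
  // When folding the tail by masking, every block of the loop is masked;
  // otherwise defer to Legal's (LAI-based) answer, which also has the DT.
  return FoldTailByMasking || Legal->blockNeedsPredication(BB);
}
```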


================
Comment at: lib/Transforms/Vectorize/VPlan.h:609
  /// VPlan opcodes, extending LLVM IR with idiomatic instructions.
-  enum { Not = Instruction::OtherOpsEnd + 1 };
+  enum { Not = Instruction::OtherOpsEnd + 1, ICmpULE };
 
----------------
dcaballe wrote:
> hsaito wrote:
> > Ayal wrote:
> > > dcaballe wrote:
> > > > hsaito wrote:
> > > > > Ayal wrote:
> > > > > > dcaballe wrote:
> > > > > > > I'm worried that this new opcode could be problematic since now we can have compare instructions represented as VPInstructions with Instruction::ICmp and Instruction::FCmp opcodes and VPInstructions with VPInstruction::ICmpULE. Internally, we have a VPCmpInst subclass to model I/FCmp opcodes and their predicates. Do you think it would be better to upstream that subclass first? 
> > > > > > An alternative, devised jointly w/ Gil, leverages the `Instruction::ICmp` opcode and the existing `ICmpInst` subclass to keep the Predicate in a scalable way:
> > > > > > 
> > > > > > ```
> > > > > > +    // Introduce the early-exit compare IV <= BTC to form header block mask.
> > > > > > +    // This is used instead of IV < TC because TC may wrap, unlike BTC.
> > > > > > +    VPValue *IV = Plan->getVPValue(Legal->getPrimaryInduction());
> > > > > > +    VPValue *BTC = Plan->getBackedgeTakenCount();
> > > > > > +    Value *Undef = UndefValue::get(Legal->getPrimaryInduction()->getType());
> > > > > > +    auto *ICmp = new ICmpInst(ICmpInst::ICMP_ULE, Undef, Undef);
> > > > > > +    Plan->addDetachedValue(ICmp);
> > > > > > +    BlockMask = Builder.createNaryOp(Instruction::ICmp, {IV, BTC}, ICmp);
> > > > > >      return BlockMaskCache[BB] = BlockMask;
> > > > > > ```
> > > > > > 
> > > > > > and then have `VPInstruction::generateInstruction()` do
> > > > > > 
> > > > > > ```
> > > > > > +  case Instruction::ICmp: {
> > > > > > +    Value *IV = State.get(getOperand(0), Part);
> > > > > > +    Value *TC = State.get(getOperand(1), Part);
> > > > > > +    auto *ICmp = cast<ICmpInst>(getUnderlyingValue());
> > > > > > +    Value *V = Builder.CreateICmp(ICmp->getPredicate(), IV, TC);
> > > > > > +    State.set(this, V, Part);
> > > > > > +    break;
> > > > > > +  }
> > > > > > ```
> > > > > > 
> > > > > > where `VPlan::addDetachedValue()` is used for disposal purposes only. This has a minor (acceptable?) impact on the underlying IR: it creates `UndefValue`'s and adds users to them.
> > > > > Pros/cons are easier to discuss with the code in hand. Diego, would you be able to upload the subclassing in Phabricator?
> > > > > 
> > > > > The alternative by Ayal/Gil works only because the VPlan modeling is done very late in the vectorization process. That'll make it very hard to move the modeling towards the beginning of vectorization. Please don't do that.
> > > > > 
> > > > > My preference is to be able to templatize VPInstruction and Instruction as much as feasible. Is that easier with subclassing? 
> > > > Yes, I also feel that opening this door could be problematic in the long term. Let me see if I can quickly post the subclass in Phabricator so that we can see which changes are necessary in other places.
> > > > 
> > > > > My preference is to be able to templatize VPInstruction and Instruction as much as feasible. Is that easier with subclassing?
> > > > 
> > > > The closer the class hierarchies are, the easier will be.
> > > Extensions of VPInstructions such as VPCmpInst should indeed be uploaded for review and deserve a separate discussion thread and justification. This patch could tentatively make use of such an extension, though for the purpose of this patch an ICmpULE opcode or a detached ICmpInst suffices. An ICmpULE opcode shouldn't be problematic **currently**, as this early-exit is the only VPInstruction compare with a Predicate, right? Note that detached UnderlyingValues could serve as **data containers** for all fields already implemented in the IR hierarchy, and could be constructed at any point of VPlan construction for that purpose. Extending VPInstructions to provide an **API** similar to that of IR Instructions seems to be an orthogonal concern with its own design objectives, and can coexist with detached Values; e.g., a VPCmpInst could hold its Predicate using a detached ICmpInst/FCmpInst.
> > I'm against a detached ICmpInst. We'll be moving VPlan modeling before the cost model, and creating an IR Instruction before deciding to vectorize is against the VPlan concept.
> > 
> > >seems to be an orthogonal concern with its own design objectives
> > 
> > Not quite. We'd like VPInstruction to be easy to use for many LLVM developers, and that has been an integral part of the design/implementation from the beginning.
> > 
> > Having said that, new opcode versus VPCmpInst doesn't block the rest of the review. Other parts of the review should proceed while that discussion continues on the side.
> I created D50823 with the VPCmpInst sub-class so that we can make a decision with the code in place.
VPlans should indeed keep the existing IR intact w/o changing it, as they are tentative by design, and also by current implementation. But creating a detached IR Instruction, just for the purpose of holding its attributes, w/o connecting it to any User, Operand (except Undef's) or BasicBlock, arguably keeps the existing IR intact. Doing so should be quite familiar to LLVM developers, avoids mirroring Instruction's class hierarchy or a subset thereof, and leverages the existing UnderlyingValue pointer that is unutilized by InnerLoopVectorizer. The next uploaded version provides this option in full.

Having said that, this patch can surely work with a VP(I)CmpInst just as well, as it merely needs a way for a single compare VPInstruction to hold a single Predicate, and print its name.
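
(For comparison with the detached-ICmpInst snippet quoted above, here is a rough sketch of how the dedicated ICmpULE opcode can be expanded in VPInstruction::generateInstruction(); the exact code in the patch may differ.)

```
  case VPInstruction::ICmpULE: {
    Value *IV = State.get(getOperand(0), Part);  // widened induction value
    Value *TC = State.get(getOperand(1), Part);  // broadcast backedge-taken count
    // Header-block mask: active lanes are those with IV <= BTC.
    Value *V = Builder.CreateICmpULE(IV, TC);
    State.set(this, V, Part);
    break;
  }
```

The single fixed predicate is what makes the plain opcode sufficient here; a VP(I)CmpInst becomes preferable once compares with arbitrary predicates need to be modeled as VPInstructions.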


================
Comment at: test/Transforms/LoopVectorize/X86/optsize.ll:12
+; CHECK-LABEL: @foo_optsize(
+; CHECK: x i8>
+
----------------
Ayal wrote:
> reames wrote:
> > Testing-wise, expanding out the generated IR w/update-lit-checks, landing the tests without the changes, and then rebasing on top would make it much easier to follow the transform being described for those of us not already expert in the vectorizer code structures. I get that you're following existing practice, but this might be one of the cases that justify changing existing practice in the area. :)
> Agreed. The original target-independent version of optsize.ll still passes, BTW (i.e., it still fails to vectorize), but due to cost-model considerations rather than scalar-tail considerations.
Expanded IR CHECKs have been added for cases that should get vectorized. For cases that should not, it suffices to check that no vector is formed.
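
(For the negative cases, a check along these lines would do; the function name below is hypothetical and only illustrates the pattern.)

```
; CHECK-LABEL: @should_not_vectorize(
; CHECK-NOT: x i8>
; CHECK: ret
```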


https://reviews.llvm.org/D50480