[PATCH] D102437: [LV] NFCI: Move implementation of isScalarWithPredication to LoopVectorizationLegality

Sander de Smalen via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri May 14 04:23:57 PDT 2021


sdesmalen added a comment.

In D102437#2759148 <https://reviews.llvm.org/D102437#2759148>, @fhahn wrote:

> I'd like to understand the main motivation and the reasons why it cannot be done via cost-modeling a bit better. IIUC the main problem is the case where scalable vectorization is forced by the user via pragmas or options? In the auto-vec case, the cost-model would already just skip scalable VFs in those cases as not profitable?
>
> You mention `... we can't yet fall back on scalarization.` From that I gather you plan to add support for that in the future? If we could scalarize arbitrary instructions for scalable vectors, would we need to do the check during the legality checks? If it would be possible to scalarize, even if expensive or hard to estimate, it seems to me that the current code should work without change, right?

That's not really the case. The legality check is asked whether the loop can be vectorized at all. Some scalable operations can never be made legal, because they would require scalarization, which doesn't exist for scalable vectors and which we have no plans to support. We therefore need to guarantee up front that any IR the vectorizer generates can actually be code-generated. Consequently, the cost model should always be able to return a valid cost for scalable operations. That is the conceptual split between legalization and the cost model: if something is legal, we must be able to cost it. It also acts as an extra guard to ensure the cost model is complete.
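To illustrate the contract I have in mind, here is a minimal standalone sketch; the names are hypothetical and do not match the real LoopVectorizationLegality / cost-model interfaces:

```cpp
struct Instr; // stand-in for an IR instruction

struct Legality {
  // True if this instruction must be scalarized when it needs predication.
  bool isScalarWithPredication(const Instr *I) const;

  // Legality gate: scalarization of scalable vectors does not exist, so an
  // instruction that would require it is rejected up front for scalable VFs.
  bool canVectorize(const Instr *I, bool ScalableVF) const {
    if (ScalableVF && isScalarWithPredication(I))
      return false;
    return true;
  }
};

struct CostModel {
  // Precondition: Legality::canVectorize(I, ScalableVF) returned true, so a
  // valid (finite) cost must always be returned here.
  unsigned getCost(const Instr *I, unsigned VF) const;
};
```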

This is not the only argument for the patch, though. As I tried to explain in the commit message, this refactoring also helps untangle the circular dependence between choosing the VF in the cost model and deciding whether tail predication is needed. For example, with a trip count of 100 and a MaxVF of 8, the current code cannot conclude the tail is unnecessary, whereas a per-VF decision could (for VF=4 no tail is needed, since 100 is divisible by 4). So there is a possible improvement here. Having isScalarWithPredication and blockNeedsPredication conflate "needs tail predication" with "this block/instruction needs predication" isn't helpful, hence the split of the two concerns in this patch.
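To make the trip-count example concrete, a small standalone sketch (the helper name is mine, not an existing LLVM function):

```cpp
#include <cstdint>

// For a fixed-width VF, the tail can be dropped exactly when the trip count
// divides evenly by the VF. (Hypothetical helper, not an LLVM API.)
bool needsTail(uint64_t TripCount, unsigned VF) { return TripCount % VF != 0; }

// TripCount = 100:
//   needsTail(100, /*MaxVF=*/8) -> true   (deciding on MaxVF alone keeps the tail)
//   needsTail(100, /*VF=*/4)    -> false  (a per-VF decision can drop it)
```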

Note that the same distinction is needed when choosing a scalable VF: we can't conclude the tail is unnecessary when the value of vscale is unknown, whereas for a fixed-width VF we could remove the tail. But that decision can't be made before calling the cost model, because at that point we don't yet know whether a fixed-width or a scalable VF will be chosen.
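A hedged sketch of that asymmetry, again with made-up names (the runtime width of a scalable VF is vscale * KnownMinVF, and vscale is unknown at compile time):

```cpp
#include <cstdint>
#include <optional>

// For a fixed-width VF the divisibility question has an exact answer; for a
// scalable VF it cannot be answered until vscale is known, so the vectorizer
// must conservatively assume a tail (or fold it). Hypothetical helper only.
std::optional<bool> needsTailPerVF(uint64_t TripCount, unsigned KnownMinVF,
                                   bool IsScalable) {
  if (!IsScalable)
    return TripCount % KnownMinVF != 0; // e.g. 100 % 4 == 0 -> no tail
  return std::nullopt;                  // unknown before vscale is fixed
}
```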


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D102437/new/

https://reviews.llvm.org/D102437


