[PATCH] D102437: [LV] NFCI: Move implementation of isScalarWithPredication to LoopVectorizationLegality

Sander de Smalen via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon May 24 08:32:41 PDT 2021


sdesmalen added a comment.

In D102437#2776569 <https://reviews.llvm.org/D102437#2776569>, @fhahn wrote:

> I am not sure if it was very clear from my earlier comment, but I was referring to scalarization + predicated-block generation done directly by LV.
>
> For regular vectors, LV does not rely on the backend for scalarization; instead it creates a loop that iterates over each index of the VF and, in the loop, conditionally executes the scalar version of the requested instruction and packs the results into a vector. For fixed-width VFs the loop is not explicit in the generated IR, because it is just unrolled for the VF. Is there anything fundamental preventing LV from creating a similar explicit loop over the scalable VF at runtime? Figuring out an accurate cost would be expensive of course, but I am curious if such scalarization would be possible.
>
> Not sure if I am missing something, but after a quick glance it seems to me that we should have all required pieces in LLVM IR to write such a loop and lower it https://llvm.godbolt.org/z/rhqh9fz8b

Thanks for clarifying. We have prototyped that approach before, and our experiments showed it was never beneficial to use scalarization within a vectorized loop with scalable vectors, so we decided to keep the vectorizer simple and disallow scalable auto-vectorization when scalarization is required. For operations like SDIV, our plan is to emit an `llvm.vp` intrinsic, which can then be ISel'ed natively (using a native instruction with predication). It seemed better to disallow this case first, to avoid any vectorization failures, and then bring up the implementation to support it.
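As an aside, the fixed-VF scalarization described in the quoted comment can be sketched lane-by-lane in Python. This is only a conceptual model of the technique (execute the scalar op per lane under a predicate, then pack the results), not LV's actual implementation; the function and variable names are made up for illustration:

```python
def scalarize_predicated(op, lhs, rhs, mask):
    """Conceptual model of per-lane scalarization with predication:
    for each lane of a (fixed) VF, execute the scalar op only when the
    predicate is active, then pack the results back into a vector.
    Inactive lanes keep a placeholder (poison/undef in IR terms)."""
    result = [None] * len(mask)       # None stands in for poison
    for lane, active in enumerate(mask):
        if active:                    # the "conditionally executes" part
            result[lane] = op(lhs[lane], rhs[lane])
    return result

# Example: a division that must be predicated to avoid dividing by zero.
lhs = [8, 9, 10, 11]
rhs = [2, 0, 5, 0]
mask = [r != 0 for r in rhs]          # guard lanes that would divide by zero
print(scalarize_predicated(lambda a, b: a // b, lhs, rhs, mask))
# -> [4, None, 2, None]
```

With the `llvm.vp` approach mentioned above, this whole per-lane loop would instead lower to a single masked operation (e.g. `llvm.vp.sdiv`) carrying the predicate.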

>> Subsequently, this means that the CostModel should always return a valid cost for scalable operations. This is the conceptual split between legalization and the cost-model. i.e. If something is legal, we must be able to cost it. It is also an extra guard to make sure the cost-model is complete.
>
> Agreed. The direction of my comments tries to figure out if there's something LV can do to scalarize arbitrary operations on scalable vectors or not.

Alright, makes sense.

>> This is not the only argument for this patch though. As I tried to explain in the commit message, this refactoring also helps untangle the circular dependence between the [choosing of the] VF from the cost-model, and the decision on whether tail predication is needed. e.g. For a trip-count of 100 and a MaxVF of 8, it could not conclude that the tail is unnecessary, whereas it could decide per VF whether a tail is needed (e.g. for VF=4, 100 is a multiple of 4, so no tail is needed). So there is a possible improvement here. Having isScalarWithPredication and blockNeedsPredication conflate "needs tail predication" with "block/instruction needs predication" isn't helpful, hence the reason for splitting the two concerns in this patch.
>
> My comment is only referring to the 'moving to LVL' part of the patch. Not sure if any bit of the untangling relies on moving the code?

I guess it could equally live in CM, but my reasoning for moving it was more that it has nothing to do with the cost-model, so moving it to LVL avoids it getting polluted with other CM-specific code in the future.
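For what it's worth, the per-VF tail decision from the trip-count example quoted above boils down to a divisibility check. A minimal sketch (hypothetical helper name, not LV's actual API):

```python
def needs_tail(trip_count: int, vf: int) -> bool:
    """A given VF needs a tail (predicated, or a scalar epilogue) iff the
    trip count is not a multiple of the VF."""
    return trip_count % vf != 0

# The numbers from the discussion: trip count 100, MaxVF 8.
print(needs_tail(100, 8))  # -> True: 100 % 8 == 4, so a tail is required
print(needs_tail(100, 4))  # -> False: 100 % 4 == 0, so no tail is needed
```

This is why deciding the question once for MaxVF is lossier than deciding it per candidate VF.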


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D102437/new/

https://reviews.llvm.org/D102437
