[llvm-dev] LV: predication

Sjoerd Meijer via llvm-dev llvm-dev at lists.llvm.org
Mon May 4 03:04:58 PDT 2020


> The harm comes if the intrinsic ends up with the wrong value, or attached to the wrong loop.

The intrinsic is marked as IntrNoDuplicate, so I wasn't worried about it ending up somewhere else. Also, it describes a property of a specific loop, a tail-folded vector loop, that I think holds even after the loop is transformed. For example, unrolling a vector loop is probably not what you want, but even if you do, the element count remains the same. But yes, I agree that some future whacky optimisation on vector loops could invalidate it, in which case we could skip tail-predication, but then we lose out on it. So, I really like this:

> If the problem is specifically figuring out the underlying element count given a predicate, maybe we could attack it from that angle?  For example, introduce a special intrinsic for deriving the mask (sort of like the SVE whilelo).

That would be an excellent way of doing it, and it also maps very well to MVE, where we have the VCTP intrinsic/instruction (Vector Create Tail-Predicate) that creates the mask/predicate. So I will go for this approach. Such an intrinsic was actually also proposed in Sam's original RFC (see https://lists.llvm.org/pipermail/llvm-dev/2019-May/132512.html), but we hadn't implemented it yet. This intrinsic will probably look something like this:

    <N x i1> @llvm.loop.get.active.mask(AnyInt, AnyInt)

It produces a <N x i1> predicate based on its two arguments, the number of elements and the vector trip count, and it will be used by the predicated masked load/store instructions in the vector body. I will start drafting an implementation and continue with this in D79100.
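
To make this concrete, here is a rough sketch of how a tail-folded vector body could use it. The argument names, the 4 x i32 instantiation, and the omitted overload mangling are placeholders for illustration only; the exact semantics still need to be pinned down in D79100:

    vector.body:
      %index = phi i32 [ 0, %vector.ph ], [ %index.next, %vector.body ]
      %gep = getelementptr inbounds i32, i32* %A, i32 %index
      %addr = bitcast i32* %gep to <4 x i32>*
      ; the two integer operands stand for the "number of elements" and the
      ; "vector trip count" mentioned above; %elems and %vtc are placeholders
      %mask = call <4 x i1> @llvm.loop.get.active.mask(i32 %elems, i32 %vtc)
      %vals = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %addr, i32 4, <4 x i1> %mask, <4 x i32> undef)
      ; ... the vector computation producing %res ...
      call void @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %res, <4 x i32>* %addr, i32 4, <4 x i1> %mask)
      %index.next = add i32 %index, 4
      ; ...

The key point is that the backend can then recover the element information directly from the operands of @llvm.loop.get.active.mask, instead of reverse-engineering it from the masks.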

Thanks,
Sjoerd.


________________________________
From: Eli Friedman <efriedma at quicinc.com>
Sent: 01 May 2020 21:11
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] LV: predication






From: Sjoerd Meijer <Sjoerd.Meijer at arm.com>
Sent: Friday, May 1, 2020 11:54 AM
To: Eli Friedman <efriedma at quicinc.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: [EXT] Re: [llvm-dev] LV: predication



Hi Eli,



> The problem with your proposal, as written, is that the vectorizer is producing the intrinsic.  Because we don’t impose any ordering on optimizations before codegen, every optimization pass in LLVM would have to be taught to preserve any @llvm.set.loop.elements.i32 whenever it makes any change.  This is completely impractical because the intrinsic isn’t related to anything optimizations would normally look for: it’s a random intrinsic in the middle of nowhere.



I do see that point. But is that not also the beauty of it? It just sits in the preheader; if it gets removed, then so be it. And if it is not recognised, then also no harm done?
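
(For reference, the proposal under discussion is an intrinsic that the vectoriser would emit in the preheader purely to record the element count for the backend, roughly along these lines; the void signature here is only a guess for illustration:)

    vector.ph:
      ; hypothetical shape of the originally proposed marker intrinsic;
      ; it carries the element count and has no other effect
      call void @llvm.set.loop.elements.i32(i32 %N)
      br label %vector.body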



The harm comes if the intrinsic ends up with the wrong value, or attached to the wrong loop.



> Probably the simplest path to get this working is to derive the number of elements in the backend (in HardwareLoops, or your tail predication pass). You should be able to figure it out from the masks used in the llvm.masked.load/store instructions in the loop.



This is what we are currently doing, and it works very well for the simpler cases. For the more complicated cases that we now want to handle as well, the pattern matching just becomes a bit too horrible, and it is fragile too. All we need is the information that the vectoriser already has, passed on somehow.
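
To give an idea of what the backend has to match today: with tail-folding, the mask fed to the masked loads/stores is, in simplified form, an icmp of the vector induction variable against a splat of the backedge-taken count, roughly like this (value names made up, and the real IR has more surrounding it):

    ; %index.splat is the broadcast scalar induction variable, %btc the
    ; backedge-taken count; the backend has to trace %mask back through
    ; this pattern to recover the element count
    %btc.insert = insertelement <4 x i32> undef, i32 %btc, i32 0
    %btc.splat  = shufflevector <4 x i32> %btc.insert, <4 x i32> undef, <4 x i32> zeroinitializer
    %induction  = add <4 x i32> %index.splat, <i32 0, i32 1, i32 2, i32 3>
    %mask       = icmp ule <4 x i32> %induction, %btc.splat
    %vals       = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %addr, i32 4, <4 x i1> %mask, <4 x i32> undef)

Once the address computation or the mask is more complicated than this, the matching gets out of hand quickly.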



As I am really keen to simplify our backend pass, would there be another way to pass this information on? If emitting an intrinsic is a blocker, could this be done with a loop annotation?



If the problem is specifically figuring out the underlying element count given a predicate, maybe we could attack it from that angle?  For example, introduce a special intrinsic for deriving the mask (sort of like the SVE whilelo).



-Eli

