[llvm-dev] LV: predication
Simon Moll via llvm-dev
llvm-dev at lists.llvm.org
Tue May 19 07:07:49 PDT 2020
On 5/19/20 12:38 PM, Sjoerd Meijer wrote:
Hi Simon,
Thanks for reposting the example, and looking at it more carefully, I think it is very similar to my first proposal. This was met with some resistance here because it dumps loop information in the vector preheader. Emitting it that early, in the vectoriser, puts a restriction on (future) optimisations that transform vector loops: they must honour/update/support this intrinsic and loop information. In D79100, the intrinsic is an integral part of the vector body and has semantics (I will update it today), and thus doesn't have these disadvantages.
The difference is that in the VP version there is an explicit dependence of every vector operation in the loop on the set.num.elements intrinsic. This dependence is obscured in the hwloop proposals (more on that below).
I understand that you are looking to get hwloops working quickly somehow - but any proposal should be designed in a forward-looking way, or we could get stuck in a place that is hard to get out of. I am looking forward to seeing the semantics of this spelled out.
Also, the vectoriser isn't using the VP intrinsics yet, so using them is a bridge too far for me at this point. But we should definitely re-evaluate at some point whether we should use or transition to them in our backend passes.
I'd very much like to see LV use VP intrinsics. I invite everybody to collaborate on VP to make it functional and useful quickly! Specifically, I am hoping we can collaborate on masked reduction intrinsics and implement them in the VP namespace. There is also the VP expansion pass on Phabricator right now (D78203 - it says 'work-in-progress' in the summary, which was probably a mistake: this is the real thing).
> Are all vector instructions in the hwloop implicitly predicated or only the masked load/store ops?
In a nutshell, when a vector loop with (explicitly) predicated masked loads/stores hits the backend, we translate the generic get.active.mask intrinsic into a target-specific one. All predication remains explicit, and this remains the case. Only at the very end do we use this intrinsic to instruction-select a specific hardware-loop variant with some implicit predication.
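As a rough sketch (types and operands abbreviated; the target-specific intrinsic name below is made up for illustration):

; as emitted by the vectoriser: generic, explicit predication
%m = call <4 x i1> @llvm.get.active.mask(i32 %iv, i32 %btc)
%v = call <4 x i32> @llvm.masked.load(..., <4 x i1> %m, ...)

; late in the backend the generic intrinsic has been rewritten to a target-specific
; one, which instruction selection can then fold into a predicated hardware loop
%m = call <4 x i1> @llvm.target.active.mask(i32 %elements.remaining)
%v = call <4 x i32> @llvm.masked.load(..., <4 x i1> %m, ...)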
I do not see an answer to my question here. If the vectorized loop, prepared for hwloop, looks like this:
%m = get.active.mask(..)
%v = masked.load ... %m
%r = sdiv %x, %y
Will the `sdiv` execute with implicit hwloop predication?
The point at which you lower the intrinsic makes no difference to its semantics; how you lower it does.
- Simon
Cheers,
Sjoerd.
________________________________
From: Simon Moll <Simon.Moll at EMEA.NEC.COM>
Sent: 19 May 2020 09:56
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>
Cc: Roger Ferrer Ibáñez <rofirrim at gmail.com>; Eli Friedman <efriedma at quicinc.com>; listmail at philipreames.com; llvm-dev <llvm-dev at lists.llvm.org>; Sander De Smalen <Sander.DeSmalen at arm.com>; hanna.kruppe at gmail.com
Subject: Re: [llvm-dev] LV: predication
Hi Sjoerd,
On 5/18/20 3:43 PM, Sjoerd Meijer wrote:
> You have similar problems with https://reviews.llvm.org/D79100
The new revision of D79100 (https://reviews.llvm.org/D79100) solves your comment 1), and I don't think your comments 2) and 3) apply, as there are no vendor-specific intrinsics involved at all here. Just to quickly walk through the optimisation pipeline: D79100 is a small extension to the vectoriser, and nothing here is related to hardware loops or target-specific constructs. The vectoriser tail-folds the loop and creates masked loads/stores; this is existing functionality, and nothing has changed here. The generic hardware-loop codegen pass inserts hardware-loop intrinsics. Very late in the pipeline, e.g. in the PPC and ARM backends, this is picked up and turned into an actual hardware loop, in our case possibly predicated, or it is reverted.
Thanks for explaining it (possibly once again); I wasn't aware that this will also be used for PPC. Point 3) still stands.
> What will you do if there are no masked intrinsics in the hwloop body?
Nothing. I.e., it can become a hardware loop, but not one with implicit predication.
Are all vector instructions in the hwloop implicitly predicated, or only the masked load/store ops? If only the latter, the issue is that the predicate parameter of masked load/store effectively changes the semantics of all other vector ops in the loop that do not have an explicit mask parameter:
%v = masked.load ... %m ; explicit predication - okay
%r = sdiv %x, %y ; implicit predication by %m for hwloops - unpredicated otherwise
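To make the hazard concrete (a sketch, assuming the divisor also comes from a masked load with a zero passthru):

%y = masked.load ... %m, zeroinitializer ; inactive lanes are filled with the zero passthru
%r = sdiv %x, %y ; divides by zero in those lanes unless the hwloop implicitly predicates this too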
> And i am curious why couldn't you use the %evl parameter of VP intrinsics to get the tail predication you are interested in?
In D79100 (https://reviews.llvm.org/D79100), the get.active.mask intrinsic makes the backedge-taken count of the scalar loop explicit. I will look again, but I don't think the VP intrinsics were able to provide this. To be honest, though, I have no preference at all what this intrinsic is; it is not relevant, as long as we can make this explicit.
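Concretely, the shape is roughly (types abbreviated; see the patch for the exact definition):

%active.mask = call <4 x i1> @llvm.get.active.mask(i32 %induction.var, i32 %backedge.taken.count)

i.e. lane i of the mask stays set while %induction.var + i does not exceed the backedge-taken count of the scalar loop.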
VP intrinsics explicitly make every vector instruction in the loop dependent on %evl. You would have:
%v = vp.load ... %evl
%r = vp.sdiv %x, %y, %evl ; explicitly predicated by the scalar loop trip count
My previous mail had an example on how %evl could be tied to the scalar trip count. Re-posting that here:
vector.preheader:
  %init.evl = call i32 @llvm.hwloop.set.elements(i32 %n)
vector.body:
  %evl = phi i32 [ %init.evl, %vector.preheader ], [ %next.evl, %vector.body ]
  %aval = call @llvm.vp.load(Aptr, ..., %evl)
  call @llvm.vp.store(Bptr, %aval, ..., %evl)
  %next.evl = call i32 @llvm.hwloop.decrement(i32 %evl)
- Simon
Cheers.
________________________________
From: Simon Moll <Simon.Moll at EMEA.NEC.COM>
Sent: 18 May 2020 14:11
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>
Cc: Roger Ferrer Ibáñez <rofirrim at gmail.com>; Eli Friedman <efriedma at quicinc.com>; listmail at philipreames.com; llvm-dev <llvm-dev at lists.llvm.org>; Sander De Smalen <Sander.DeSmalen at arm.com>; hanna.kruppe at gmail.com
Subject: Re: [llvm-dev] LV: predication
On 5/18/20 2:53 PM, Sjoerd Meijer wrote:
Hi,
I abandoned that approach and followed Eli's suggestion, see somewhere earlier in this thread, and emit an intrinsic that represents/calculates the active mask. I've just uploaded a new revision for D79100 that implements this.
Cheers.
You have similar problems with https://reviews.llvm.org/D79100
Since there are no masked operations except for load/store, how are LLVM optimizations supposed to know that they must not hoist/sink operations with side effects out of the hwloop? These operations have an implicit dependence on the iteration variable.
What will you do if there are no masked intrinsics in the hwloop body? This can happen once you generate vector code beyond trivial loops or have a vector IR generator other than LV.
And I am curious why you couldn't use the %evl parameter of the VP intrinsics to get the tail predication you are interested in?
- Simon
________________________________
From: Simon Moll <Simon.Moll at EMEA.NEC.COM>
Sent: 18 May 2020 13:32
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>
Cc: Roger Ferrer Ibáñez <rofirrim at gmail.com>; Eli Friedman <efriedma at quicinc.com>; listmail at philipreames.com; llvm-dev <llvm-dev at lists.llvm.org>; Sander De Smalen <Sander.DeSmalen at arm.com>; hanna.kruppe at gmail.com
Subject: Re: [llvm-dev] LV: predication
On 5/5/20 12:07 AM, Sjoerd Meijer via llvm-dev wrote:
what we would like to generate is a vector loop with implicit predication, which works by setting up the number of elements processed by the loop:
hwloop 10
[i:4] = b[i:4] + c[i:4]
Why couldn't you use VP intrinsics and scalable types for this?
%bval = call <vscale x 4 x double> @llvm.vp.load(..., /* %evl */ 10)
%cval = call <vscale x 4 x double> @llvm.vp.load(..., /* %evl */ 10)
%sum = fadd <vscale x 4 x double> %bval, %cval
store [..]
I see three issues with the llvm.set.loop.elements approach:
1) It is conceptually broken: as others have pointed out, optimizations can move the intrinsic around since it doesn't have any dependencies that would naturally keep it in place (see the sketch after this list).
2) The whole proposed set of intrinsics is vendor-specific: this causes fragmentation, and I don't see why we would want to emit vendor-specific intrinsics in a generic optimization pass. Soon we would see reports along the lines of "your optimization caused regressions for MVE - add a check that the transformation must not touch llvm.set.loop.* or llvm.active.mask intrinsics when compiling for MVE..". I doubt you would tolerate it if that intrinsic were somehow removed in performance-critical code that would then remain scalar as a result.. so, I do not see the "beauty of the approach".
3) We need a reliable solution to properly support vector ISAs such as the RISC-V V extension and SX-Aurora, and also MVE. I don't see that reliability in this proposal.
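As a sketch of issue 1) (the exact name and signature of the proposed intrinsic may differ between revisions; the point is only the missing dependence): nothing below consumes a result of the intrinsic, so the visible def-use chains give an optimization no reason to keep it in place:

vector.preheader:
  call void @llvm.set.loop.elements.i32(i32 %n) ; no result, no users: nothing ties it to the loop below
vector.body:
  %m = ... ; mask computed without reference to the intrinsic
  %v = call @llvm.masked.load(..., %m, ...)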
If, for whatever reason, the above does not work or seems too far away from your proposal, here is another idea to make more explicit hwloops work with the VP intrinsics - in a way that does not break under optimization:
vector.preheader:
  %evl = call i32 @llvm.hwloop.set.elements(i32 %n)
vector.body:
  %lastevl = phi i32 [ %evl, %vector.preheader ], [ %next.evl, %vector.body ]
  %aval = call @llvm.vp.load(Aptr, ..., %lastevl)
  call @llvm.vp.store(Bptr, %aval, ..., %lastevl)
  %next.evl = call i32 @llvm.hwloop.decrement(i32 %lastevl)
Note that the way the VP intrinsics are designed, it is not possible to break this code by hoisting the VP calls out of the loop: passing an %evl >= the operation's vector size constitutes UB (see https://llvm.org/docs/LangRef.html#vector-predication-intrinsics). We can use attributes to do the same for sinking (e.g. don't move VP calls across hwloop.decrement).
- Simon