[llvm-dev] An update on scalable vectors in LLVM

Vineet Kumar via llvm-dev llvm-dev at lists.llvm.org
Mon Nov 16 01:29:10 PST 2020


Hi All,

@Sander, thanks a lot for the clear and concise summary of the whole 
effort.

>
> On Wed, 11 Nov 2020 at 22:06, Sander De Smalen 
> <Sander.DeSmalen at arm.com <mailto:Sander.DeSmalen at arm.com>> wrote:
>
>     We (Arm) prefer starting out with adding support for 1 in upstream
>     LLVM, because it is the easiest to support and gives a lot of
>     ‘bang for buck’ that will help us incrementally add more scalable
>     auto-vec capabilities to the vectorizer. A proof of concept of
>     what this style of vectorization requires was shared on
>     Phabricator recently: https://reviews.llvm.org/D90343.
>
>     Barcelona Supercomputer Centre shared a proof of concept for style
>     2 that uses the Vector Predication Intrinsics proposed by Simon
>     Moll (VP: https://reviews.llvm.org/D57504, link to the POC:
>     https://repo.hca.bsc.es/gitlab/rferrer/llvm-epi). In the past Arm
>     has shared an alternative implementation of 2 which predates the
>     Vector Predication intrinsics (https://reviews.llvm.org/D87056).
>
>
> I think both are equally good. The third one seems a bit too 
> restrictive to me (but I'm probably missing something).
>
> I have previously recommended (1) for the sake of simplicity in 
> implementation (one step at a time), but I don't see anything wrong in 
> us trying both, even at the same time. Or even a merged way where you 
> first vectorise, then predicate, then fuse the tail.

I should have mentioned this earlier, but our first implementation also 
followed the first approach (unpredicated vector body, scalar tail). It 
gave us a good base for implementing the second approach on top, which 
mostly involved modifying parts of the existing tail-folding 
infrastructure and using a TTI hook to decide whether to emit VP 
intrinsics. It makes a lot of sense to start with the first approach 
upstream. It will also let everyone get a taste of auto-vectorization 
for scalable vectors and give us a base for more insightful discussions 
on the best way to support the other approaches on top of it.
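For anyone less familiar with the two styles being compared, here is a 
rough scalar C sketch of the loop structures involved. This is only an 
illustration of the control flow, not actual compiler output; VL stands 
in for the runtime-determined scalable vector length, and the inner 
per-lane loops represent what would be single vector (or predicated 
vector) operations in IR:

```c
#define VL 4  /* stand-in for the hardware-determined vector length */

/* Style 1: unpredicated vector body followed by a scalar tail. */
void add_style1(const int *a, const int *b, int *c, int n) {
    int i = 0;
    /* Vector body: full VL-wide chunks only. */
    for (; i + VL <= n; i += VL)
        for (int lane = 0; lane < VL; ++lane)   /* one vector add in IR */
            c[i + lane] = a[i + lane] + b[i + lane];
    /* Scalar tail: the remaining n % VL elements. */
    for (; i < n; ++i)
        c[i] = a[i] + b[i];
}

/* Style 2: tail folded into a fully predicated body, as VP intrinsics
   express it (each operation carries a mask and an element count). */
void add_style2(const int *a, const int *b, int *c, int n) {
    for (int i = 0; i < n; i += VL)
        for (int lane = 0; lane < VL; ++lane) {
            int active = (i + lane) < n;  /* per-lane predicate */
            if (active)
                c[i + lane] = a[i + lane] + b[i + lane];
        }
}
```

Both produce the same results; the difference is that style 2 has a 
single loop whose final iteration runs partially masked, rather than a 
separate scalar epilogue.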

>
> We have enough interested parties that we can try out multiple 
> solutions and pick the best ones, or all of them. And as you say, 
> they'll all use the same plumbing, so it's more sharing than competing.
>

Thanks and Regards,

Vineet



http://bsc.es/disclaimer
