[llvm-dev] An update on scalable vectors in LLVM

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Thu Nov 12 02:54:41 PST 2020


Hi Sander,

Awesome work from everyone involved. Thank you very much for your efforts!

I know some people wanted it to go a lot faster than it did, but now we
have an infrastructure that has reached consensus across different
companies and industries.

We're finally discussing high-level vectorisation strategies without having
to worry about the mechanics of scalable vector representation. This is a
big long-term win.

On Wed, 11 Nov 2020 at 22:06, Sander De Smalen <Sander.DeSmalen at arm.com>
wrote:

> We (Arm) prefer starting out with adding support for 1 in upstream LLVM,
> because it is the easiest to support and gives a lot of ‘bang for buck’
> that will help us incrementally add more scalable auto-vec capabilities to
> the vectorizer. A proof of concept of what this style of vectorization
> requires was shared on Phabricator recently:
> https://reviews.llvm.org/D90343.
>
> Barcelona Supercomputer Centre shared a proof of concept for style 2 that
> uses the Vector Predication Intrinsics proposed by Simon Moll (VP:
> https://reviews.llvm.org/D57504, link to the POC:
> https://repo.hca.bsc.es/gitlab/rferrer/llvm-epi). In the past Arm has
> shared an alternative implementation of 2 which predates the Vector
> Predication intrinsics (https://reviews.llvm.org/D87056).
>

I think both are equally good. The third one seems a bit too restrictive to
me (but I'm probably missing something).
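For anyone skimming the thread, this is roughly what (1) looks like at the
IR level. It's my own simplified sketch, not lifted from D90343; the value
names are made up and the scalar tail loop is omitted. The point is that
the body uses scalable types and steps the induction variable by the
runtime vector length, but needs no masks:

vector.body:
  %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
  %gep.a = getelementptr inbounds i32, i32* %a, i64 %index
  %ptr.a = bitcast i32* %gep.a to <vscale x 4 x i32>*
  %va = load <vscale x 4 x i32>, <vscale x 4 x i32>* %ptr.a, align 4
  %gep.b = getelementptr inbounds i32, i32* %b, i64 %index
  %ptr.b = bitcast i32* %gep.b to <vscale x 4 x i32>*
  %vb = load <vscale x 4 x i32>, <vscale x 4 x i32>* %ptr.b, align 4
  %vsum = add <vscale x 4 x i32> %va, %vb
  %gep.c = getelementptr inbounds i32, i32* %c, i64 %index
  %ptr.c = bitcast i32* %gep.c to <vscale x 4 x i32>*
  store <vscale x 4 x i32> %vsum, <vscale x 4 x i32>* %ptr.c, align 4
  ; step by the runtime vector length, i.e. vscale * 4 elements
  %vs = call i64 @llvm.vscale.i64()
  %step = shl i64 %vs, 2
  %index.next = add nuw i64 %index, %step
  ; %n.vec is the trip count rounded down to a multiple of the VF;
  ; a scalar remainder loop (not shown) handles the leftover iterations
  %done = icmp eq i64 %index.next, %n.vec
  br i1 %done, label %middle.block, label %vector.body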

I have previously recommended (1) for the sake of implementation simplicity
(one step at a time), but I don't see anything wrong with us trying both,
even at the same time, or even a merged approach where you first vectorise,
then predicate, then fuse the tail.
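To make the "predicate, then fuse the tail" part concrete: a tail-folded
body drops the scalar remainder loop and instead masks off the lanes beyond
the trip count on the last iteration. Again this is only my sketch, not the
BSC POC; the exact intrinsic mangling and names are illustrative, and the
VP style would additionally carry an explicit vector length operand rather
than relying solely on the mask:

vector.body:
  %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
  ; lane i is active iff %index + i < %n, so the tail is masked off
  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 %index, i64 %n)
  %gep.a = getelementptr inbounds i32, i32* %a, i64 %index
  %ptr.a = bitcast i32* %gep.a to <vscale x 4 x i32>*
  %va = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>* %ptr.a, i32 4, <vscale x 4 x i1> %mask, <vscale x 4 x i32> undef)
  %gep.b = getelementptr inbounds i32, i32* %b, i64 %index
  %ptr.b = bitcast i32* %gep.b to <vscale x 4 x i32>*
  %vb = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(<vscale x 4 x i32>* %ptr.b, i32 4, <vscale x 4 x i1> %mask, <vscale x 4 x i32> undef)
  ; in the VP style this add would become @llvm.vp.add.nxv4i32, taking
  ; the mask plus an explicit vector length
  %vsum = add <vscale x 4 x i32> %va, %vb
  %gep.c = getelementptr inbounds i32, i32* %c, i64 %index
  %ptr.c = bitcast i32* %gep.c to <vscale x 4 x i32>*
  call void @llvm.masked.store.nxv4i32.p0nxv4i32(<vscale x 4 x i32> %vsum, <vscale x 4 x i32>* %ptr.c, i32 4, <vscale x 4 x i1> %mask)
  %vs = call i64 @llvm.vscale.i64()
  %step = shl i64 %vs, 2
  %index.next = add i64 %index, %step
  ; no scalar remainder: the mask already handled the partial last iteration
  %done = icmp uge i64 %index.next, %n
  br i1 %done, label %exit, label %vector.body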

We have enough interested parties that we can try out multiple solutions
and pick the best ones, or all of them. And as you say, they'll all use the
same plumbing, so it's more sharing than competing.

> Hopefully in a couple of months we’ll be able to slowly enable more
> scalable vectorization and work towards building LNT with scalable vectors
> enabled. When that becomes sufficiently stable, we can consider gearing up
> a BuildBot to help guard any new changes we make for scalable vectors.
>

This would be great, even before it's enabled by default.

cheers,
--renato