[llvm-dev] Scalable Vector Types in IR - Next Steps?

Amara Emerson via llvm-dev llvm-dev at lists.llvm.org
Wed Mar 13 11:45:07 PDT 2019


Disclaimer: I’m only speaking for myself, not Apple.

This is really disappointing. Resorting to multi-versioned fixed-length vectorization isn’t a solution that’s competitive with native VLA support, so it doesn’t look like a credible alternative (at least not without elaborating on it on the mailing list). Without a practical alternative, it’s essentially saying “no” to a whole class of vector architectures, of which SVE is only one.

Amara
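
For context, the contrast Amara is drawing can be sketched in LLVM IR, using the `vscale` type syntax from Graham's RFC (the exact spelling was still under discussion at the time). With fixed-width vectorization the compiler has to emit one body per vector width the target might have, plus runtime dispatch between them; a VLA body is written once against a hardware-determined `vscale`:

```llvm
; Fixed-width multi-versioning: one definition per vector width,
; selected at runtime by some dispatch mechanism (not shown).
define <4 x i32> @add_v4(<4 x i32> %a, <4 x i32> %b) {
  %r = add <4 x i32> %a, %b
  ret <4 x i32> %r
}
define <8 x i32> @add_v8(<8 x i32> %a, <8 x i32> %b) {
  %r = add <8 x i32> %a, %b
  ret <8 x i32> %r
}

; VLA: a single definition. <vscale x 4 x i32> is 4 x i32 replicated
; vscale times, where vscale is a constant the hardware fixes at run
; time -- so this one body covers every SVE/RVV vector length.
define <vscale x 4 x i32> @add_vla(<vscale x 4 x i32> %a,
                                   <vscale x 4 x i32> %b) {
  %r = add <vscale x 4 x i32> %a, %b
  ret <vscale x 4 x i32> %r
}
```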

> On Mar 13, 2019, at 9:04 AM, Graham Hunter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> Hi Renato,
> 
>> It goes without saying that those discussions should have been had in
>> the mailing list, not behind closed doors.
> 
> I have encouraged people to respond on the list or the RFC many times,
> but I've not had much luck in getting people to post even if they
> approve of the idea.
> 
>> Agreeing to implementations
>> in private is asking to get bad reviews in public, as the SVE process
>> has shown *over and over again*.
> 
> There isn't an agreement on the implementation yet; I have posted two
> possibilities and am trying to get consensus on an approach from the
> community.
> 
>>> The basic argument was that they didn't believe the value gained from enabling VLA autovectorization was worth the added complexity in maintaining the codebase. They were open to changing their minds if we could demonstrate sufficient demand for the feature.
>> 
>> In that case, the current patches to change the IR should be
>> abandoned, as well as reverting the previous change to the types, so
>> that we don't carry any unnecessary code forward.
> 
> There's no consensus on supporting the opaque types yet either. Even
> if we do end up going down that route, it could be modified -- as I
> mentioned in my notes, I could introduce a single top-level type to
> the IR if I stored additional data in it (making it effectively the
> same as the current VectorType, just opaque to existing optimization
> passes), and then would be able to lower directly to the existing
> scalable MVTs we have.
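
Very roughly, and with every name below invented purely for illustration (the actual shape is whatever the posted patches define), that single opaque top-level type could look something like this:

```llvm
; Hypothetical sketch only: a single opaque top-level type that would
; record its element type and minimum lane count as internal data, so
; it behaves like VectorType but is invisible to existing optimization
; passes, which cannot look inside an opaque type.
%scalable.v4i32 = type opaque

; Operations on it would go through intrinsics (this name is made up)
; and lower directly to the existing scalable MVTs (e.g. nxv4i32).
declare %scalable.v4i32* @llvm.scalable.add.v4i32(%scalable.v4i32*,
                                                  %scalable.v4i32*)
```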
> 
> 
>> The review you sent seems to be a mechanical change to include the
>> intrinsics, but the target lowering change seems to be too small to
>> actually be able to lower anything.
> 
> The new patches are just meant to demonstrate the basics of the opaque
> type to see if there's greater consensus in exploring this approach
> instead of the VLA approach.
> 
>> Without context, it's hard to know what's going on.
> 
> The current state is just what you stated in your initial email in this
> chain; we have a solution that seems to work (in principle) for SVE, RVV,
> and SX-Aurora, but not enough people who care about VLA vectorization
> beyond those groups.
> 
> Given the time constraints, Arm is being pushed to consider a plan B to
> get something working in time for early 2020.
> 
> -Graham
> 
> 
> 
> 
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev


