[llvm-dev] Scalable Vector Types in IR - Next Steps?

Graham Hunter via llvm-dev llvm-dev at lists.llvm.org
Thu Mar 14 03:45:35 PDT 2019


Thanks for the support.

To clarify, Arm would very much prefer to proceed with the full scalable
IR type proposal, but we're facing time pressure now.

We would like to be able to reach consensus on an approach around the end
of EuroLLVM this year so that we can begin a full implementation.

The opaque type patches were only intended to show how the third-party proposal
might look; I agree it should be closer to the scalable IR proposal. The
two main points that (imo) would make it easier to switch later are:
  - Embedding the element type and minimum length, which copies the basic
    semantics of VectorType
  - Serializing in the same way we do in the scalable IR proposal
    (to a '<scalable n x ty>'). We should then just be able to switch
    the Types used by the IR reader and writer.
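As a rough illustration (my own sketch, not taken from the patches; the
exact spelling is whatever the IR reader and writer settle on), the
serialized form under the scalable IR proposal would sit alongside
fixed-width vectors like so:

```llvm
; Fixed-width vector: exactly 4 lanes of i32.
%sum.fixed = add <4 x i32> %a, %b

; Scalable vector under the proposal: a hardware-dependent multiple of
; 4 lanes of i32; the multiple is constant at runtime but unknown at
; compile time.
%sum.scalable = add <scalable 4 x i32> %p, %q
```

Since an opaque type embedding the same element type and minimum lane
count could print to an identical string, switching later would only
require changing which Types the reader and writer construct.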

-Graham


> On 13 Mar 2019, at 23:02, Renato Golin <rengolin at gmail.com> wrote:
> 
> Agreed with both! 
> 
> Furthermore, any temporary solution will have to be very similar to what we expect to see natively, or the transition to native may never happen. 
> 
> On Wed, 13 Mar 2019, 18:55 Finkel, Hal J., <hfinkel at anl.gov> wrote:
> On 3/13/19 1:45 PM, Amara Emerson via llvm-dev wrote:
> > Disclaimer: I’m only speaking for myself, not Apple.
> >
> > This is really disappointing. Resorting to multi-versioned fixed length vectorization isn’t a solution that’s competitive with the native VLA support, so it doesn’t look like a credible alternative suggestion (at least not without elaborating it on the mailing list). Without a practical alternative, it’s essentially saying “no” to a whole class of vector architectures of which SVE is only one.
> 
> 
> To the extent that this alternative direction represents an exploration
> so that we can all evaluate in a more-informed manner, I think that is
> valuable. However, let me agree with Amara: I prefer the original
> approach. Among many other advantages, users will expect the compiler to
> perform arithmetic optimizations on VLA operations (e.g., InstCombines),
> and if we can't reuse the existing logic for this purpose, we'll end up
> with an inferior result.
> 
> Thanks again,
> 
> Hal
> 
> 
> >
> > Amara
> >
> >> On Mar 13, 2019, at 9:04 AM, Graham Hunter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> >>
> >> Hi Renato,
> >>
> >>> It goes without saying that those discussions should have been had in
> >>> the mailing list, not behind closed doors.
> >> I have encouraged people to respond on the list or the RFC many times,
> >> but I've not had much luck in getting people to post even if they
> >> approve of the idea.
> >>
> >>> Agreeing to implementations
> >>> in private is asking to get bad reviews in public, as the SVE process
> >>> has shown *over and over again*.
> >> There isn't an agreement on the implementation yet; I have posted two
> >> possibilities and am trying to get consensus on an approach from the
> >> community.
> >>
> >>>> The basic argument was that they didn't believe the value gained from enabling VLA autovectorization was worth the added complexity in maintaining the codebase. They were open to changing their minds if we could demonstrate sufficient demand for the feature.
> >>> In that case, the current patches to change the IR should be
> >>> abandoned, as well as reverting the previous change to the types, so
> >>> that we don't carry any unnecessary code forward.
> >> There's no consensus on supporting the opaque types either yet. Even
> >> if we do end up going down that route, it could be modified -- as I
> >> mentioned in my notes, I could introduce a single toplevel type to
> >> the IR if I stored additional data in it (making it effectively the
> >> same as the current VectorType, just opaque to existing optimization
> >> passes), and then would be able to lower directly to the existing
> >> scalable MVTs we have.
> >>
> >>
> >>> The review you sent seems to be a mechanical change to include the
> >>> intrinsics, but the target lowering change seems to be too small to
> >>> actually be able to lower anything.
> >> The new patches are just meant to demonstrate the basics of the opaque
> >> type to see if there's greater consensus in exploring this approach
> >> instead of the VLA approach.
> >>
> >>> Without context, it's hard to know what's going on.
> >> The current state is just what you stated in your initial email in this
> >> chain; we have a solution that seems to work (in principle) for SVE, RVV,
> >> and SX-Aurora, but not enough people that care about VLA vectorization
> >> beyond those groups.
> >>
> >> Given the time constraints, Arm is being pushed to consider a plan B to
> >> get something working in time for early 2020.
> >>
> >> -Graham
> >>
> >>
> >>
> >>
> >> _______________________________________________
> >> LLVM Developers mailing list
> >> llvm-dev at lists.llvm.org
> >> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> 
> -- 
> Hal Finkel
> Lead, Compiler Technology and Programming Languages
> Leadership Computing Facility
> Argonne National Laboratory
> 


