[llvm-dev] Does it make sense to upstream some MVT's?

Serge Pavlov via llvm-dev llvm-dev at lists.llvm.org
Wed Jan 17 06:19:36 PST 2018


Hi all,

Progress in machine learning gives rise to many cores designed for this
task. They tend to have wide registers; I know of a core that operates on
vector types of up to 2K bytes. Off-the-shelf support for wide vector types
would facilitate compiler development in such cases, because adding new
types is not merely a matter of a few lines in ValueTypes.td.

Thanks,
--Serge

2018-01-17 14:13 GMT+07:00 Martin J. O'Riordan via llvm-dev <
llvm-dev at lists.llvm.org>:

> Hi Sean,
>
>
>
> I had to add ‘v16f16’ to our out-of-tree target, primarily to allow me
> to express lowering for all the OpenCL types (well, except for the
> ‘v3T’ types).
>
>
>
> The trend does seem to be towards larger bit-width SIMD registers, and as
> you say this will increase over time; but perhaps instead of using a discrete
> enumeration combined with additional entries in several switch-statements,
> it might be better to rethink MVTs using templates so that they can be
> instantiated automatically as needed by a target.  That might be one way of
> avoiding the problem of having either a sparse population of MVTs as needed
> by the sum of all in-tree targets, or, on the other hand, the bloat of
> expressing all possible combinations.
>
>
>
> How does LLVM handle 2D vectors/matrices?  I haven’t moved on to v6.0.0
> yet, but as far as I can tell v5.0.x only abstracts 1D vectors: N elements
> of M bits, and having types like ‘v256i16’ is not quite the same as
> having support for, let’s say, ‘v16x16i16’.  Having a high-level
> abstraction for reasoning about NxN elements of M bits would be really
> useful/cool, especially for exotic instructions with special register
> allocation requirements, and for classic nested loops such as convolutions.
>
>
>
>             MartinO
>
>
>
> *From:* llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org] *On Behalf Of *Sean
> Silva via llvm-dev
> *Sent:* 17 January 2018 02:58
> *To:* llvm-dev <llvm-dev at lists.llvm.org>
> *Subject:* [llvm-dev] Does it make sense to upstream some MVT's?
>
>
>
> Hi,
>
>
>
> Our backend for Pixel Visual Core uses some MVT's that aren't upstream.
> Does it make sense to upstream them? I figure that as architectures get
> wider, we'll eventually have "all" possible combinations of widths and
> types, but on the other hand, having code in tree that isn't used by
> current backends isn't great.
>
>
>
> These are the MVT's that we have added:
>
>
>
> 16x16 element (2D SIMD) 1-bit predicate registers:
>
> v256i1
>
>
>
> 16x16 element (2D SIMD) 16-bit registers:
>
> v256i16
>
>
>
> 20x20 element (2D SIMD) 16-bit registers (we round up to v512 instead of
> v400):
>
> v512i16
>
>
>
> 32-bit versions of the above 16-bit registers (to represent 32-bit
> accumulators for MAD instructions, and also to dual-issue "wide"
> instructions to the two non-MAD ALUs in each lane):
>
> v256i32
>
> v512i32
>
>
>
>
>
> For those interested in more details about Pixel Visual Core, the 6th
> edition of Hennessy and Patterson's "Computer Architecture: A Quantitative
> Approach" http://a.co/et2K1xk has a section about it (Section 7.7 pg
> 579-592). I'll bring my copy to the next Bay Area LLVM Social if folks want
> to take a look.
>
>
>
> -- Sean Silva
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
>