[llvm-dev] [RFC] Vector Predication
Robin Kruppe via llvm-dev
llvm-dev at lists.llvm.org
Mon Feb 4 12:18:36 PST 2019
On Mon, 4 Feb 2019 at 18:15, David Greene via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> Simon Moll <moll at cs.uni-saarland.de> writes:
>
> > You are referring to the sub-vector sizes, if I am understanding
> > correctly. I'd assume that the mask sub-vector length always has to be
> > either 1 or the same as the data sub-vector length. For example, this
> > is ok:
> >
> > %result = call <scalable 3 x float> @llvm.evl.fsub.nxv3f32(
> >     <scalable 3 x float> %x, <scalable 3 x float> %y,
> >     <scalable 1 x i1> %M, i32 %L)
>
> What does <scalable 1 x i1> applied to <scalable 3 x float> mean? I
> would expect a requirement of <scalable 3 x i1>. At least that's how I
> understood the SVE proposal [1]. The n's in <scalable n x type> have to
> match.
>
I believe the idea is to allow each single mask bit to control multiple
consecutive lanes at once, effectively interpreting the vector being
operated on as "many short fixed-length vectors, concatenated" rather than
as a single long vector of scalars. This is a different interpretation of
that type than usual, but it's not crazy; for example, a similar
reinterpretation of vector types seems to be the favored approach for
adding matrix operations to LLVM IR. It somewhat obscures the point to
discuss this only for scalable vectors, though: there's no conceptual
reason why one couldn't do the same with fixed-size vectors.
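To spell that reading out (a sketch in the proposal's notation; the
mangling suffix and the exact masked-off behavior are my assumptions,
nothing settled):

  ; Bit i of %M enables or disables the whole i-th group of 3
  ; consecutive lanes of %x and %y.
  %result = call <scalable 3 x float> @llvm.evl.fsub.nxv3f32(
                <scalable 3 x float> %x, <scalable 3 x float> %y,
                <scalable 1 x i1> %M, i32 %L)
  ; This would be equivalent to widening the mask per group, i.e.
  ; %Mwide[3*i + j] = %M[i] for j = 0, 1, 2, and then performing an
  ; ordinary per-lane masked fsub with %Mwide.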
In fact, I would recommend against making almost any new feature or
intrinsic exclusive to scalable vectors, including this one: there
shouldn't be much extra code required to allow and support the fixed-size
forms, and not doing so makes the IR less orthogonal. For example, if a
<scalable 4 x float> fadd with a <scalable 1 x i1> mask works, then a
<4 x float> fadd with a <1 x i1> mask, an <8 x float> fadd with a
<2 x i1> mask, and so on should also be possible overloads of the same
intrinsic.
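Concretely, something like this (same caveat: these fixed-width
overloads are hypothetical sketches, not part of the proposal as posted):

  ; One mask bit per group of 4 lanes, fixed-width overloads:
  %a = call <4 x float> @llvm.evl.fadd.v4f32(
           <4 x float> %x0, <4 x float> %y0, <1 x i1> %m0, i32 %l)
  %b = call <8 x float> @llvm.evl.fadd.v8f32(
           <8 x float> %x1, <8 x float> %y1, <2 x i1> %m1, i32 %l)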
So far, so good. A bit odd, when I think about it, but if hardware out
there has that capability, maybe this is a good way to encode it in IR
(other options might work too, though). The crux, however, is the
interaction with the dynamic vector length: is it in terms of the mask,
or of the longer data vector? If the latter, what happens if it isn't
divisible by the mask length? There are multiple options, and it's not
clear to me which one is "the right one", both for architectures with
native support (hopefully the one brought up here won't be the only one)
and for the internal consistency of the IR. If there were an established
architecture with this kind of feature, where people have gathered lots
of practical experience with it, we could use that to inform the decision
(just as we have for ordinary predication and dynamic vector length). But
I'm not aware of any architecture that does this other than the one Jacob
and lkcl are working on, and as far as I know their project is still in
its early stages.
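To make the ambiguity concrete (same hypothetical intrinsic as above):

  %r = call <scalable 3 x float> @llvm.evl.fsub.nxv3f32(
           <scalable 3 x float> %x, <scalable 3 x float> %y,
           <scalable 1 x i1> %M, i32 5)
  ; Reading A: %L counts mask elements -> groups 0..4, i.e. data
  ;            lanes 0..14, are active.
  ; Reading B: %L counts data lanes -> lanes 0..4 are active, so the
  ;            second group is only partially enabled, because 5 is
  ;            not divisible by the group size 3.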
Cheers,
Robin