[llvm-dev] [RFC] Vector/SIMD ISA Context Abstraction
Renato Golin via llvm-dev
llvm-dev at lists.llvm.org
Tue Aug 3 07:19:29 PDT 2021
On Sat, 31 Jul 2021 at 00:33, Luke Kenneth Casson Leighton via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> if however instead of an NxM problem this was turned into N+M,
> separating out "scalar base" from "augmentation" throughout the IR,
> the problem disappears entirely.
>
Hi Luke,
It's not entirely clear to me what you are suggesting here.
For context:
* Historically, we have tried to keep as many operations as possible in
native IR, to avoid the explosion of intrinsics you describe.
* However, intrinsics traditionally reduce the number of instructions in
a basic block rather than increasing it, so there's always a balance.
* For example, some reduction intrinsics were added to address that bloat,
but no target is forced to use them (see the sketch after this list).
* If you can represent the operation as a series of native IR
instructions, by all means, you should do so.
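To make that concrete, here's a rough sketch (the intrinsic names have
changed between releases, so treat them as illustrative): the same
horizontal sum can be written as one reduction intrinsic call or expanded
into native IR, and a target that prefers the expansion is free to use it.

  declare i32 @llvm.vector.reduce.add.v4i32(<4 x i32>)

  ; single call, keeps the basic block small
  %sum = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %v)

  ; equivalent expansion in native IR: log2(N) shuffle+add steps
  %hi   = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> <i32 2, i32 3, i32 undef, i32 undef>
  %lo1  = add <4 x i32> %v, %hi
  %hi2  = shufflevector <4 x i32> %lo1, <4 x i32> undef, <4 x i32> <i32 1, i32 undef, i32 undef, i32 undef>
  %lo2  = add <4 x i32> %lo1, %hi2
  %sum2 = extractelement <4 x i32> %lo2, i32 0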
I get that a lot of intrinsics are repeated patterns over all the
variations, and that most targets don't have that many, so it's "ok".
I also get that most SIMD vector operations aren't intrinsically vector,
but expansions of scalar operations for the benefit of vectorisation (plus
predication, to avoid undefined behaviour and to allow "funny" patterns,
etc.).
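For instance (a sketch only, with types picked arbitrarily), a predicated
vector add is really just the scalar add widened across lanes, with a mask
keeping the inactive lanes from doing anything harmful:

  ; scalar operation
  %r = add i32 %a, %b

  ; the same operation widened to 4 lanes, with a predicate selecting
  ; the original value on inactive lanes instead of the new result
  %sum = add <4 x i32> %va, %vb
  %res = select <4 x i1> %mask, <4 x i32> %sum, <4 x i32> %va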
But it's not clear to me what the "augmentation" part would be in other
targets.
> even permute / shuffle Vector/SIMD operations are separateable into
> "base" and "abstract Vector Concept": the "base" operation in that
> case being "MV.X" (scalar register copy, indexable - reg[RT] =
> reg[reg[RA]] and immediate variant reg[RT] = reg[RA+imm])
>
Shuffles are already represented as native IR instructions (shufflevector,
insert/extract element), so I'm not sure this clarifies much.
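In plain IR that looks roughly like this (a sketch, not lifted from any
particular back-end):

  ; reverse a 4-lane vector with a plain shufflevector
  %rev = shufflevector <4 x i32> %v, <4 x i32> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>

  ; indexed single-element moves are insert/extract element
  %elt = extractelement <4 x i32> %v, i32 2
  %new = insertelement <4 x i32> %v, i32 %elt, i32 0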
Have you looked at the current scalable vector implementation?
It allows a set of operations on open-ended vectors, controlled by a
predicate, which may well be the "augmentation" you're looking for.
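As a sketch (plain IR only; the actual lowering differs per target), a
predicated operation on a scalable type looks something like:

  ; <vscale x 4 x i32> = a hardware-chosen multiple of 4 lanes of i32
  %sum = add <vscale x 4 x i32> %va, %vb
  ; the predicate disables the lanes past the loop bound
  %res = select <vscale x 4 x i1> %pred, <vscale x 4 x i32> %sum, <vscale x 4 x i32> %va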
> the issue is that this is a massive intrusive change, effectively a
> low-level redesign of LLVM IR internals for every single back-end.
>
Not necessarily.
For example, scalable vectors are being introduced in a way that
non-scalable back-ends (mostly) won't notice.
And it wasn't just a matter of adding a few intrinsics; the very concept of
vectors was changed.
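To illustrate (nothing more than the type syntax):

  <4 x i32>          ; fixed-width vector: exactly 4 lanes, known at compile time
  <vscale x 4 x i32> ; scalable vector: 4 * vscale lanes, only known at run time

A back-end that never produces or consumes the scalable form (mostly) won't
see it, as above.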
There could be a (set of) construct(s) for your particular back-end that is
invisible to others.
Of course, the more invisible constructs there are, the harder it is to
validate and change the code where they intersect, so the change must
really be worth the extra hassle.
With both Arm and RISC-V implementing scalable extensions, that change was
deemed worthwhile, and work is progressing.
So, if you could leverage the existing code to your advantage, you'd avoid
having to convince a huge community to implement a large breaking change.
And you'd also give us one more reason for the scalable extension to exist.
:)
Hope this helps.
cheers,
--renato