[llvm-dev] [SVE][AArch64] Codegen for a scalable vector splat

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Fri Aug 30 03:25:42 PDT 2019


On Fri, 30 Aug 2019 at 00:35, Amara Emerson via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> IIRC the overall feeling is the same as the other attempts to canonicalize representations. We don’t introduce new representations unless absolutely necessary, and insert+shufflevector is technically sufficient to achieve a splat, even though it looks pretty horrible, bloats the IR etc.

This question comes up often enough that perhaps we need a blog
post about it. :)

Mainly, the gist is flexibility and maintainability. Optimisations
work on standard IR, but every new construct has to be taught to all
passes before it becomes really useful. If we introduced a new node
type for every language/machine concept, we'd get a combinatorial
explosion in the number of conversions to handle, and we'd lose the
ability to pattern match efficiently when looking for optimisations.
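
To make that concrete, the splat in question is already expressible
with existing instructions; roughly something like this (a sketch,
with the element type and vector width chosen arbitrarily for
illustration):

  %ins   = insertelement <vscale x 4 x i32> undef, i32 %val, i32 0
  %splat = shufflevector <vscale x 4 x i32> %ins,
                         <vscale x 4 x i32> undef,
                         <vscale x 4 x i32> zeroinitializer

Any pass that already understands insertelement and shufflevector can
reason about that without being taught a new instruction; the cost is
the verbosity Amara mentions.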

The side effect is having to be careful (and thus conservative) about
code transformations. You end up with longer and more complicated
def-use chains, which are hard to match, transform, move around, and
simplify. But at least the simpler patterns work, and improving a
pattern becomes an incremental change.

The main benefit is having a common infrastructure to do all of those
changes in the right place, hopefully only once per stage, and at
non-combinatorial time complexity.

Hope this helps.

--renato

PS: In this particular case, the ISD node would be temporary and
localised, and at that level we really don't want the code to change
anyway. Very different from IR.

