[llvm-dev] Adding support for vscale

Luke Kenneth Casson Leighton via llvm-dev llvm-dev at lists.llvm.org
Tue Oct 1 20:09:33 PDT 2019

On Wednesday, October 2, 2019, Sander De Smalen <Sander.DeSmalen at arm.com> wrote:

> It was definitely not my intention to be non-inclusive, my apologies if
> that seemed the case!

No problem Sander.

> > can i therefore recommend a change, here:
> > [...]
> > "This patch adds vscale as a symbolic constant to the IR, similar to
> > undef and zeroinitializer, so that vscale - representing the
> > runtime-detected "element processing" capacity - can be used in
> > constant expressions"
> Thanks for the suggestion! I like the use of the word `capacity`
> especially now that the term 'vector length' has overloaded meanings.
> I'll add some extra words to the vscale patch to clarify its meaning.

super. will keep an eye out for it.
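(as an aside, for anyone following along: my reading of the patch is that
vscale would become usable wherever a constant can appear. a sketch of
stepping a pointer forward by one whole scalable register might then look
like the following - syntax illustrative only, drawn from my reading of
the proposal, not from the patch as landed:)

```llvm
; sketch only: <vscale x 4 x i32> holds (4 * vscale) i32 elements, with
; vscale fixed by the hardware but unknown at compile-time.  the proposal
; would let vscale appear in constant expressions, e.g. as a GEP index:
define i32* @next_chunk(i32* %p) {
  %elts = mul i64 vscale, 4                      ; elements per register
  %q = getelementptr i32, i32* %p, i64 %elts     ; advance one register
  ret i32* %q
}
```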

> > my only concern would be: some circumstances (some algorithms) may
> > perform better with MMX, some with SSE, some with different levels of
> > performance on e.g. AMD or Intel, which would, with benchmarking, show
> > that some algorithms perform better if vscale=8 (resulting in some
> > other MMX/SSE subset being utilised) than if vscale=16.
> If fixed-width/short vectors are more beneficial for some algorithm, I'd
> recommend using fixed-width vectors directly. It would be up to the target
> to lower that to the vector instruction set. For AArch64, this can be done
> using Neon (max 128bits) or with SVE/SVE2 using a 'fixed-width' predicate
> mask, e.g. vl4 for a predicate of 4 elements, even when the vector capacity
> is larger than 4.

I have a feeling that this was - is - the "workaround" that Graham was
referring to.
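(to make the fixed-width route concrete: as i understand Sander's
suggestion, the frontend would simply emit ordinary fixed-width IR like
the sketch below, and the choice between Neon and a vl4-predicated SVE
instruction is then entirely the backend's business:)

```llvm
; plain fixed-width IR: no scalable types, no vscale anywhere.
; a target with SVE could still select a predicated instruction for
; this, using a predicate that enables exactly 4 lanes (vl4).
define <4 x float> @fixed_add(<4 x float> %a, <4 x float> %b) {
  %sum = fadd <4 x float> %a, %b
  ret <4 x float> %sum
}
```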

> > would it be reasonable to assume that predication *always* is to be
> > used in combination with vscale?  or is it the intention to
> > [eventually] be able to auto-generate the kinds of [painful in
> > retrospect] SIMD assembly shown in the above article?
> When the size of a vector is constant throughout the program, but unknown
> at compile-time, then some form of masking would be required for loads and
> stores (or other instructions that may cause an exception). So it is
> reasonable to assume that predication is used for such vectors.
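(noting for the archive: the masking Sander describes lines up with the
existing masked load/store intrinsics which, if i read the patch right,
would simply accept the scalable types as well. a sketch, with the nxv
mangling assumed from the patch:)

```llvm
; sketch: a predicated load of a scalable vector.  inactive lanes take
; their value from %passthru instead of touching memory, so no fault
; can occur past the end of a buffer.
declare <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(
    <vscale x 4 x i32>*, i32 immarg, <vscale x 4 x i1>, <vscale x 4 x i32>)

define <vscale x 4 x i32> @guarded_load(<vscale x 4 x i32>* %p,
                                        <vscale x 4 x i1> %mask,
                                        <vscale x 4 x i32> %passthru) {
  %v = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0nxv4i32(
      <vscale x 4 x i32>* %p, i32 4, <vscale x 4 x i1> %mask,
      <vscale x 4 x i32> %passthru)
  ret <vscale x 4 x i32> %v
}
```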
> >> This model would be complementary to `vscale`, as it still requires the
> >> same scalable vector type to describe a vector of unknown size.
> >
> > ah.  that's where the assumption breaks down, because of SV allowing
> > its vectors to "sit" on top of the *actual* scalar regfile(s), we do
> > in fact permit an [immediate-specified] vscale to be set, arbitrarily,
> > at any time.
> Maybe I'm missing something here, but if SV uses an immediate to define
> vscale, that implies the value of vscale is known at compile-time and thus
> regular (fixed-width) vector types can be used?

It's not really intended to be exposed to frontends except by #pragma or
inline assembly.

We *can* set an immediate; however, by doing so we hard-code the maximum
number of scalar regs allocated for use.

If that is too many, then register spill might occur (with disastrous
penalties for 3D); if too few, then performance is poor as ALUs sit idle.

In addition, SV works on RV32 as well as RV64, where the regfiles are half
the total number of bits; consequently we really will need dynamic scaling
there, in order to halve the size of vectors rather than risk register
spill.

Plus, if people reeeeeaaally want to not have 128 registers (there may be
a genuine market need for that, particularly in 3D Embedded), they might
consider the cost of 128 regs to be too great and use the "normal" 32 of
RISCV instead.

Here they would definitely want vscale=1 and to do everything as close to
scalar operation as possible. If they have vec4 datatypes (using SUBVL)
they might end up with regspill but that is a price they pay for the
decision to reduce the regfile size.

(btw SUBVL is a multiplier of length 2, 3 or 4, representing vec2-4,
identical to RVV's subvector.

SUBVL is explicitly used in the (c/c++) source code, whereas MVL immediates
and VL lengths definitely are not.)

> > now, we mmmiiiight be able to get away with assuming that vscale is
> > equal to the absolute maximum possible setting (64 for RV64, 32 for
> > RV32), then use / play-with the "runtime active VL get/set"
> > intrinsics.
> >
> > i'm kiinda wary of saying "absolutely yes that's the way forward" for
> > us, particularly without some input from Jacob here.
> Note that there isn't a requirement to use `vscale` as proposed in my
> first patch.

Oh? Ah! That is an important detail :)

One that is tough to express in a short introduction in the docstring
without going into too much detail.

> If RV only cares about the runtime active-VL then some explicit, separate
> mechanism to get/set the active VL would be needed anyway. I imagine the
> resulting runtime value (instead of `vscale`) to then be used in loop
> indvar updates, address computations, etc.

Ok this might be the GetOutOfJailFree card I was looking for :)
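(to sketch what i think Sander means here, with a completely hypothetical
@llvm.riscv.setvl - a name invented on the spot purely to show the shape
of the indvar update, not an actual intrinsic from any patch:)

```llvm
; hypothetical sketch of a strip-mined loop driven by a runtime
; active-VL, NOT by vscale.  @llvm.riscv.setvl is an invented name
; standing in for whatever get/set-VL mechanism would be defined.
declare i64 @llvm.riscv.setvl(i64)   ; invented: request VL, get granted VL

define void @scale(float* %a, i64 %n, float %k) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %remaining = sub i64 %n, %i
  %vl = call i64 @llvm.riscv.setvl(i64 %remaining)  ; active VL this trip
  ; ... vl-governed loads, fmuls and stores would go here ...
  %i.next = add i64 %i, %vl                         ; indvar steps by VL
  %done = icmp uge i64 %i.next, %n
  br i1 %done, label %exit, label %loop
exit:
  ret void
}
```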

My general feeling on this, then, is that both RVV and SV should avoid
using vscale.

In the case of RVV, MVL is a hardware-defined constant that is never
*intended* to be known by applications: there is no published detection
mechanism.  Loops are supposed to be designed to simply run a few more
iterations on lower-spec'd hardware.

Robin, what are your thoughts there?

For SV it looks like we will need to do something like <%reg x 4 x f32>,
with an analysis pass to process it: calculating the total number of
available regs for a given block, isolated by LD and ST boundaries, and
maximising %reg so as not to spill.

> > ok, a link to that would be handy... let me see if i can find it...
> > what comes up is this: https://reviews.llvm.org/D57504 is that right?
> Yes, that's the one!

Super, encountered it a few months back; will read again.


crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68