[llvm-dev] [RFC][SVE] Supporting Scalable Vector Architectures in LLVM IR (take 2)

Chris Lattner via llvm-dev llvm-dev at lists.llvm.org
Thu Jul 6 15:13:41 PDT 2017


On Jul 6, 2017, at 3:03 PM, Amara Emerson <amara.emerson at gmail.com> wrote:
>> 1) Almost anything touching (e.g. transforming) vector operations will have to be aware of this concept.  Given a first class implementation of SVE, I don’t see how that’s avoidable though, and your extension of VectorType is sensible.
> 
> Yes, however we have found that the vast majority of vector transforms
> don't need any modification to deal with scalable types. There are
> obvious exceptions, such as analysing shuffle vector masks for
> specific patterns etc.

Ok great.

>> 2) This means that VectorType is sometimes fixed size, and sometimes unknowable.  I don’t think we have an existing analog for that in the type system.
>> 
>> Is this type a first class type?  Can you PHI them, can you load/store them, can you pass them as function arguments without limitations?  If not, that is a serious problem.  How does struct layout with a scalable vector in it work?  What does an alloca of one of them look like?  What does a spill look like in codegen?
> Yes, as an extension to VectorType they can be manipulated and passed
> around like normal vectors: loaded/stored directly, used in phis, put in
> llvm structs, etc. Address computation generates expressions in terms of
> vscale and it seems to work well.

Right, that works out through composition, but what does it mean?  I can't have a global variable of a scalable vector type, nor does it make sense for a scalable vector to be embeddable in an LLVM IR struct: nothing that measures the size of a struct is prepared to deal with a non-constant answer.
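
As a rough sketch of the problem, using the proposed <n x 4 x i32> syntax (the names here are made up), neither of these has a compile-time-constant layout:

    %pair = type { i32, <n x 4 x i32> }        ; sizeof(%pair), and the offset of anything
                                               ; laid out after the vector, isn't a constant
    @g = global <n x 4 x i32> zeroinitializer  ; no fixed amount of storage to emit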

>>> With a scalable vector type defined, we now need a way to generate addresses for
>>> consecutive vector values in memory and to be able to create basic constant
>>> vector values.
>>> 
>>> For address generation, the `vscale` constant is added to represent the runtime
>>> value of `n` in `<n x m x type>`.
>> 
>> This should probably be an intrinsic, not an llvm::Constant.  The design of llvm::Constant is already wrong: it shouldn’t have operations like divide, and it would be better to not contribute to the problem.
> Could you explain your position more on this? The Constant
> architecture has been a very natural fit for this concept from our
> perspective.

It is appealing, but it is wrong.  Constant should really only model primitive constants (ConstantInt/FP, etc) and we should have one more form for “relocatable” constants.  Instead, we have intertwined constant folding and ConstantExpr logic that doesn’t make sense.

A better pattern to follow is an intrinsic like llvm.coro.size.i32(), which always returns a constant value.
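
Sketching what that could look like here (the intrinsic name is purely illustrative, by analogy with llvm.coro.size; it is not an existing intrinsic):

    declare i64 @llvm.vscale.i64()             ; hypothetical: returns the runtime 'n'

    define i32* @next_chunk(i32* %base) {
      %n      = call i64 @llvm.vscale.i64()
      %stride = mul i64 %n, 4                              ; i32 elements per <n x 4 x i32>
      %next   = getelementptr i32, i32* %base, i64 %stride ; address of the next vector in memory
      ret i32* %next
    }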

>> Ok, that sounds complicated, but can surely be made to work.  The bigger problem is that there are various LLVM IR transformations that want to put registers into memory.  All of these will be broken with this sort of type.
> Could you give an example?

The concept of “reg2mem” is to put SSA values into allocas for passes that can’t (or don’t want to) update SSA.  Similarly, function body extraction can turn SSA values into parameters, and depending on the implementation can pack them into structs.  The coroutine logic similarly needs to store registers if they cross suspend points; there are multiple other examples.
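
As a rough sketch, demoting a scalable SSA value the way reg2mem does would have to produce something like (type syntax per the proposal):

    %slot = alloca <n x 4 x i32>                          ; stack slot of runtime-dependent size
    store <n x 4 x i32> %v, <n x 4 x i32>* %slot
    ...
    %v.reload = load <n x 4 x i32>, <n x 4 x i32>* %slot

so every one of those transformations ends up needing frame objects whose size is only known at runtime.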

-Chris


