[llvm-commits] PATCH: remove VICmp and VFCmp.

David Greene dag at cray.com
Thu Jul 9 10:39:43 PDT 2009


On Thursday 09 July 2009 03:15, Duncan Sands wrote:
> Hi Dave,
>
> > I haven't been tracking this, so can someone explain what the "vectors of
> > i1 problem" is?  Is it a problem of how to generate code on architectures
> > that don't have a mask concept?
>
> how is <8 x i1> stored in memory?  8 bits packed into a byte, or like an
> array [8 x i1], which on most machines is stored as 8 bytes?  I think it
> is clear that it should be stored the same as an array would be.

That's not at all clear to me.  A vector mask is a bit-packed vector of
i1, and we're going to need some way of representing that in the LLVM
IR (Instructions, SDNodes and MachineInstrs).
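
Just to pin down the two candidate layouts (a sketch of mine, not
anything from the patch), consider what a plain store of a mask has to
mean:

  ; Bit-packed: <8 x i1> occupies a single byte, bit k = element k.
  ; Array-like: [8 x i1] occupies eight bytes, one element per byte.
  define void @store_mask(<8 x i1> %m, <8 x i1>* %p) {
    store <8 x i1> %m, <8 x i1>* %p  ; which of the two layouts?
    ret void
  }

The whole question is which of those two layouts the store above is
required to produce.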

> However, the code generators currently treat the value as
> bits packed into an i8.  In general, LLVM thinks that <N x Ty> occupies
> N * size_of_Ty_in_bits bits.  This causes all kinds of problems.
> Consider for example a vector of two x86 long doubles.  These are 10
> bytes long each, so the vector is 20 bytes long.  This means that the
> second long double will be stored 10 bytes offset from the start, and
> thus will not be aligned.  Etc etc.
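
Spelling out that arithmetic (my illustration, not code from the
patch):

  ; Under the N * size_in_bits rule, <2 x x86_fp80> occupies
  ; 2 * 80 bits = 20 bytes, so the second element starts at byte
  ; offset 10.  An array [2 x x86_fp80] would instead pad each
  ; element out to its ABI store size, keeping both elements aligned.
  @v = global <2 x x86_fp80> zeroinitializer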

So won't legalize fix that when it realizes that a vector of long double
is an illegal type on x86?  If, say, the frontend sees a struct with an
array of two long doubles in it, it would be incorrect to translate that
to a vector of two long doubles in LLVM, wouldn't it?  At least, that is
true given the current vector semantics; if we change the semantics, then
of course all bets are off.  But we definitely need some way of expressing
bit-packed information that naturally operates as a vector.
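
For reference, here's a minimal sketch of what that looks like once
this patch lands, assuming the post-patch semantics where plain icmp
accepts vector operands and yields a vector of i1:

  define <4 x i1> @mask(<4 x i32> %a, <4 x i32> %b) {
    ; With vicmp gone, icmp on vectors produces one i1 per element.
    %m = icmp sgt <4 x i32> %a, %b
    ret <4 x i1> %m
  }

The open question above is then only how a value like that <4 x i1> is
laid out once it hits memory.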

                                    -Dave


