[llvm-commits] PATCH: remove VICmp and VFCmp.

Duncan Sands baldrick at free.fr
Thu Jul 9 01:15:59 PDT 2009


Hi Dave,

> I haven't been tracking this, so can someone explain what the "vectors of
> i1 problem" is?  Is it a problem of how to generate code on architectures
> that don't have a mask concept?

How is <8 x i1> stored in memory?  As 8 bits packed into a byte, or like
an array [8 x i1], which on most machines is stored as 8 bytes?  I think
it is clear that it should be stored the same way an array would be.
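To make the two layouts concrete (just a sketch, the function is only
for illustration):

  ; (a) bit-packed: one byte, with bit k holding element k
  ; (b) array-like, as [8 x i1]: each element widened to a byte,
  ;     so 8 bytes in all
  define void @demo(<8 x i1> %v, <8 x i1>* %p) {
    store <8 x i1> %v, <8 x i1>* %p   ; does this write 1 byte or 8?
    ret void
  }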

However, large parts of LLVM assume that <8 x i1> is bit-packed.  For
example, you can bitcast <8 x i1> to i8.  There is a general rule that
a bitcast of Ty1 to Ty2 is the same as storing a Ty1 and loading it out
as a Ty2.  Thus we can deduce that <8 x i1> is stored in memory as 8
bits packed into an i8.  In general, LLVM thinks that <N x Ty> occupies
N * size_of_Ty_in_bits bits.  This causes all kinds of problems.
Consider for example a vector of two x86 long doubles.  Each long
double is 10 bytes (80 bits) long, so the vector is 20 bytes long.
This means that the second long double is stored at an offset of 10
bytes from the start, and thus is not aligned.  And so on.
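For example (a sketch, the function names are made up):

  ; By the store/load rule this bitcast must behave like storing %v
  ; and loading the byte back, which only works if <8 x i1> is
  ; bit-packed into a single byte.
  define i8 @pack(<8 x i1> %v) {
    %b = bitcast <8 x i1> %v to i8
    ret i8 %b
  }

  ; By the same N * element-bits rule, this vector is 160 bits
  ; (20 bytes), so the second x86_fp80 sits at byte offset 10,
  ; which is not a valid alignment for long double on any x86 ABI.
  define x86_fp80 @second(<2 x x86_fp80> %w) {
    %e = extractelement <2 x x86_fp80> %w, i32 1
    ret x86_fp80 %e
  }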

Basically, VectorType::getBitWidth is completely bogus (IMHO).  Instead,
I think that for memory layout purposes vectors should be treated the
same as arrays.  This means they may have holes (e.g. vectors of x86
long double) and that you need target data to calculate their size.
This may be problematic because you can no longer tell whether a
bitcast is valid without target data.  By the way, I'm pretty sure that
gcc lays out vectors of x86 long double the same as arrays.
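Here is roughly what the array-style layout looks like with target data
(the datalayout string is an x86-64-style example, not gospel):

  ; "f80:128:128" gives x86_fp80 an ABI alignment of 128 bits, so each
  ; array element occupies 16 bytes: 10 bytes of data plus 6 bytes of
  ; padding -- a hole.
  target datalayout = "e-p:64:64:64-f80:128:128"

  ; Elements at byte offsets 0 and 16, total size 32 bytes.  Computing
  ; any of these numbers requires the target data above, so without it
  ; you cannot tell whether a bitcast to a "same sized" integer type
  ; makes sense.
  @a = global [2 x x86_fp80] zeroinitializer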

Ciao,

Duncan.


