[llvm-commits] PATCH: remove VICmp and VFCmp.

Duncan Sands baldrick at free.fr
Thu Jul 9 11:07:11 PDT 2009


Hi David,

>>> I haven't been tracking this, so can someone explain what the "vectors of
>>> i1 problem" is?  Is it a problem of how to generate code on architectures
>>> that don't have a mask concept?
>> how is <8 x i1> stored in memory?  8 bits packed into a byte, or like an
>> array [8 x i1], which on most machines is stored as 8 bytes?  I think it
>> is clear that it should be stored the same as an array would be.
> 
> That's not at all clear to me.  A vector mask is a bit-packed vector of
> i1 and we're going to need some way of representing that in the LLVM
> IR (Instructions, SDNodes and MachineInstructions).

Actual usage is of course a consideration.  However, if you want all
vectors to be bit-packed, then vectors of x86 long double will be
bit-packed too.  GCC supports vectors of long double, and it doesn't
seem to bit-pack them...  Of course, such GCC vectors can be thunked
into non-vectors when converting to LLVM.  Bit-packing also makes
codegen more complicated.
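
To make the ambiguity concrete, here is a minimal IR sketch (the
function name is made up for illustration); the open question is how
many bytes the store writes:

  ; Does this store write 1 byte (8 bits packed into a byte) or
  ; 8 bytes (one byte per element, as [8 x i1] would be laid out)?
  define void @store_mask(<8 x i1> %m, <8 x i1>* %p) {
    store <8 x i1> %m, <8 x i1>* %p
    ret void
  }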

>> bits packed into an i8.  In general, LLVM thinks that <N x Ty> occupies
>> N * size_of_Ty_in_bits bits.  This causes all kinds of problems.
>> Consider, for example, a vector of two x86 long doubles.  Each is 10
>> bytes long, so the vector is 20 bytes long.  This means that the second
>> long double will be stored at an offset of 10 bytes from the start, and
>> thus will not be aligned.  Etc etc.
> 
> So won't legalize fix that when it realizes a vector of long double is
> an illegal type on x86?

I guess the second long double could be stored via a bunch of (aligned)
integer operations, but that would be pretty sucky.
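
For concreteness, a minimal sketch of the layout problem (the function
name is illustrative only):

  ; If <2 x x86_fp80> occupies exactly 2 * 80 bits = 20 bytes, this
  ; store places the second element at byte offset 10 from %p, which
  ; cannot satisfy x86_fp80's natural alignment.
  define void @store_pair(<2 x x86_fp80> %v, <2 x x86_fp80>* %p) {
    store <2 x x86_fp80> %v, <2 x x86_fp80>* %p
    ret void
  }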

> If, say, the frontend sees a struct with an
> array of two long doubles in it, it would be incorrect to translate that
> to a vector of 2 long doubles in LLVM, wouldn't it?

It doesn't do that translation.  But the gcc front-end also has a
notion of a vector, which currently maps onto LLVM's vectors.

> At least given the
> current semantics of vector.  If we change the semantics then of course
> all bets are off.  But we definitely need some way of expressing bit-packed
> information that naturally operates as a vector.

So the processor really operates on such bit-packed vectors, producing
them as the results of comparisons?

Anyway, some kind of definitive decision needs to be made about how
vectors are to be represented in memory.  Presumably Chris should
pronounce on this from on high :)

Ciao,

Duncan.


