[llvm-commits] [llvm] r97064 - in /llvm/trunk: include/llvm/Target/TargetData.h lib/CodeGen/SelectionDAG/LegalizeDAG.cpp lib/CodeGen/SelectionDAG/LegalizeTypes.cpp lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp lib/Target/TargetData.cpp test/CodeGen/X86/vector-of-i1.ll
Duncan Sands
baldrick at free.fr
Thu Feb 25 02:59:26 PST 2010
Hi Dan,
> Make getTypeSizeInBits work correctly for array types; it should return
> the number of value bits, not the number of bits of allocation for in-memory
> storage.
>
> Make getTypeStoreSize and getTypeAllocSize work consistently for arrays and
> vectors.
>
> Fix several places in CodeGen which compute offsets into in-memory vectors
> to use TargetData information.
>
> This fixes PR1784.
Thanks for doing this - it is an improvement, but unfortunately it does not
really solve PR1784. In fact it doesn't address the most problematic issues
at all.
For example, consider bitcasting between vectors and integers. A bitcast of
<8 x i1> to i8 is currently legal. You have decided that <8 x i1> is 8 bytes
long, the same as [8 x i1], and is stored as 8 consecutive bytes. This breaks
the invariant that a bitcast from type S to type T is the same as storing an S
to memory and loading it out again as a T. That is a pretty fundamental
invariant to break! OK, suppose you decide that you don't want to break it -
you might then say that a bitcast from <8 x i1> to i8 is illegal, but that a
bitcast from <8 x i1> to i64 is legal (same size in memory). But then the
legality or not of a bitcast becomes target dependent, which is also a big
no-no!
As far as I can see, there are three possibilities:
(1) Drop the invariant that bitcast from S to T is the same as "store to
memory as type S, reload from memory as type T". This is what you chose.
I don't think Chris will like this :)
(2) Disallow bitcast of vectors.
(3) Store vectors to memory in a target independent way. This is the
opposite approach to the one you took.
Personally I don't much like (1) ;)
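To make the mismatch concrete, here is a minimal sketch. The layout string
and the values are mine, not the patch's; they just follow from the semantics
this commit establishes:

  #include "llvm/DerivedTypes.h"
  #include "llvm/LLVMContext.h"
  #include "llvm/Target/TargetData.h"
  #include <cassert>
  using namespace llvm;

  int main() {
    LLVMContext Ctx;
    TargetData TD("e");  // hypothetical minimal little-endian layout
    const Type *V8I1 = VectorType::get(Type::getInt1Ty(Ctx), 8);
    const Type *I8 = Type::getInt8Ty(Ctx);
    // The bitcast is legal because the value bit counts agree...
    assert(TD.getTypeSizeInBits(V8I1) == TD.getTypeSizeInBits(I8));
    // ...but the memory images do not: storing an <8 x i1> writes 8
    // bytes while an i8 occupies 1, so "store as S, reload as T" no
    // longer agrees with the bitcast.
    assert(TD.getTypeStoreSize(V8I1) == 8 && TD.getTypeStoreSize(I8) == 1);
    return 0;
  }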
Another issue is that VectorType has a method called getBitWidth. This is
defined as:
  inline unsigned getBitWidth() const {
    return NumElements * getElementType()->getPrimitiveSizeInBits();
  }
After your changes, most uses of this method are bogus as far as I can see.
Actually, they were mostly bogus before your changes too!
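To pin down the bogosity: getBitWidth counts value bits, which after your
changes can be much smaller than the number of bits the vector occupies in
memory. A sketch, reusing the hypothetical setup from the earlier example:

  void showGetBitWidth(const TargetData &TD, LLVMContext &Ctx) {
    const VectorType *V8I1 = VectorType::get(Type::getInt1Ty(Ctx), 8);
    unsigned ValueBits = V8I1->getBitWidth();            // 8 * 1 == 8
    uint64_t MemBits = TD.getTypeStoreSizeInBits(V8I1);  // 8 bytes == 64
    // Any caller using getBitWidth to size the vector's memory image
    // is now off by a factor of eight.
    (void)ValueBits; (void)MemBits;
  }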
> /// getTypeStoreSize - Return the maximum number of bytes that may be
> /// overwritten by storing the specified type. For example, returns 5
> /// for i36 and 10 for x86_fp80.
> - uint64_t getTypeStoreSize(const Type *Ty) const {
> - return (getTypeSizeInBits(Ty)+7)/8;
> - }
> + uint64_t getTypeStoreSize(const Type *Ty) const;
It doesn't make any sense to change getTypeStoreSize without changing
getTypeStoreSizeInBits, since they are the same method, except that one
returns the result in bits while the other rounds up to bytes. I think you
should revert the change to getTypeStoreSize, and modify
getTypeStoreSizeInBits instead.
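Concretely, I would expect something with this shape - a sketch of the
suggestion, not code from the patch:

  // In TargetData.cpp: all the array/vector special-casing lives in
  // the bit variant.
  uint64_t TargetData::getTypeStoreSizeInBits(const Type *Ty) const {
    // Arrays and vectors are stored as sequences of elements.
    if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty))
      return getTypeAllocSizeInBits(ATy->getElementType()) *
             ATy->getNumElements();
    if (const VectorType *VTy = dyn_cast<VectorType>(Ty))
      return getTypeAllocSizeInBits(VTy->getElementType()) *
             VTy->getNumElements();
    return 8 * ((getTypeSizeInBits(Ty) + 7) / 8);
  }

  // The byte variant stays a trivial wrapper in the header; the store
  // size in bits is always a multiple of 8, so the division is exact.
  uint64_t getTypeStoreSize(const Type *Ty) const {
    return getTypeStoreSizeInBits(Ty) / 8;
  }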
> /// getTypeStoreSizeInBits - Return the maximum number of bits that may be
> /// overwritten by storing the specified type; always a multiple of 8. For
> @@ -208,10 +206,7 @@
> /// of the specified type, including alignment padding. This is the amount
> /// that alloca reserves for this type. For example, returns 12 or 16 for
> /// x86_fp80, depending on alignment.
> - uint64_t getTypeAllocSize(const Type* Ty) const {
> - // Round up to the next alignment boundary.
> - return RoundUpAlignment(getTypeStoreSize(Ty), getABITypeAlignment(Ty));
> - }
> + uint64_t getTypeAllocSize(const Type* Ty) const;
Likewise, it makes no sense to modify getTypeAllocSize without modifying
getTypeAllocSizeInBits: they are the same method, except that one returns the
result in bits while the other rounds up to bytes. I think you should revert
the change to getTypeAllocSize, and modify getTypeAllocSizeInBits instead.
In fact as far as I can see there is no need to change this method at all
(see below) since your new logic always gives the same result as the old
logic.
> --- llvm/trunk/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp (original)
> +++ llvm/trunk/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp Wed Feb 24 16:05:23 2010
> @@ -660,7 +660,8 @@
> unsigned CastOpc = IdxVT.bitsGT(PtrVT) ? ISD::TRUNCATE : ISD::ZERO_EXTEND;
> Tmp3 = DAG.getNode(CastOpc, dl, PtrVT, Tmp3);
> // Add the offset to the index.
> - unsigned EltSize = EltVT.getSizeInBits()/8;
> + unsigned EltSize = TLI.getTargetData()->
> + getTypeAllocSize(EltVT.getTypeForEVT(*DAG.getContext()));
Please introduce a convenience method for getting the alloc size for a value
type, and use that instead. By the way, the codegen logic for vector bitcast
is still broken, but that's because vector bitcast is broken in the IR already.
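Something like the following is what I have in mind; the name
getEVTAllocSize is made up, pick whatever fits:

  // Hypothetical helper, e.g. on TargetLowering:
  uint64_t TargetLowering::getEVTAllocSize(EVT VT, LLVMContext &Ctx) const {
    return getTargetData()->getTypeAllocSize(VT.getTypeForEVT(Ctx));
  }

  // The LegalizeDAG call site above would then shrink to:
  unsigned EltSize = TLI.getEVTAllocSize(EltVT, *DAG.getContext());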
> --- llvm/trunk/lib/Target/TargetData.cpp (original)
> +++ llvm/trunk/lib/Target/TargetData.cpp Wed Feb 24 16:05:23 2010
> @@ -455,7 +455,7 @@
> return getPointerSizeInBits();
> case Type::ArrayTyID: {
> const ArrayType *ATy = cast<ArrayType>(Ty);
> - return getTypeAllocSizeInBits(ATy->getElementType())*ATy->getNumElements();
> + return getTypeSizeInBits(ATy->getElementType())*ATy->getNumElements();
This change is dangerous. You just changed the size in bits of [N x i1] to N,
when previously it was probably 8*N. Did you audit all the places that make
use of this method?
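To spell the hazard out with a worked example (my arithmetic, assuming i1 is
allocated as one byte):

  // Before this hunk, for [8 x i1]:
  //   getTypeSizeInBits([8 x i1]) == 8 * getTypeAllocSizeInBits(i1)
  //                               == 8 * 8 == 64
  // After it:
  //   getTypeSizeInBits([8 x i1]) == 8 * getTypeSizeInBits(i1)
  //                               == 8 * 1 == 8
  // Every caller that read this as "bits occupied in memory" now sees
  // a value eight times smaller.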
> +/// getTypeStoreSize - Return the maximum number of bytes that may be
> +/// overwritten by storing the specified type. For example, returns 5
> +/// for i36 and 10 for x86_fp80.
The changes to this method are ok, but should have been made to
getTypeStoreSizeInBits.
> +uint64_t TargetData::getTypeStoreSize(const Type *Ty) const {
> + // Arrays and vectors are allocated as sequences of elements.
How they are allocated is not relevant here. How about this instead:
// Arrays and vectors are stored as sequences of elements.
> +/// getTypeAllocSize - Return the offset in bytes between successive objects
> +/// of the specified type, including alignment padding. This is the amount
> +/// that alloca reserves for this type. For example, returns 12 or 16 for
> +/// x86_fp80, depending on alignment.
> +uint64_t TargetData::getTypeAllocSize(const Type* Ty) const {
> + // Arrays and vectors are allocated as sequences of elements.
> + // Note that this means that things like vectors-of-i1 are not bit-packed
> + // in memory (except on a hypothetical bit-addressable machine). If
> + // someone builds hardware with native vector-of-i1 stores and the idiom
> + // of bitcasting vectors to integers in order to bitpack them for storage
> + // isn't sufficient, TargetData may need new "size" concept.
> + if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty))
> + return getTypeAllocSize(ATy->getElementType()) * ATy->getNumElements();
> + if (const VectorType *VTy = dyn_cast<VectorType>(Ty))
> + return getTypeAllocSize(VTy->getElementType()) * VTy->getNumElements();
> +
> + // Round up to the next alignment boundary.
> + return RoundUpAlignment(getTypeStoreSize(Ty), getABITypeAlignment(Ty));
> +}
This change is pointless since it doesn't actually change anything - the
previous logic (round up the store size by the alignment) would give exactly
the same result. Please revert this part.
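For instance, take [4 x i32] on a typical target where i32 has alloc size 4
and ABI alignment 4 (my numbers, for illustration):

  // New logic: getTypeAllocSize(i32) * 4 == 4 * 4 == 16
  // Old logic: RoundUpAlignment(getTypeStoreSize([4 x i32]),    // 16
  //                             getABITypeAlignment([4 x i32])) // 4
  //            == RoundUpAlignment(16, 4) == 16
  // The element's alloc size is already a multiple of its ABI alignment,
  // and the array's alignment is its element's, so the round-up is a
  // no-op.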
Ciao,
Duncan.