[LLVMdev] BITS_BIG_ENDIAN in llvm-gcc
Jay Foad
jay.foad at gmail.com
Tue Jul 22 03:06:35 PDT 2008
There are various functions in llvm-convert.cpp that depend on BITS_BIG_ENDIAN:
TreeToLLVM::EmitLoadOfLValue()
TreeToLLVM::EmitMODIFY_EXPR()
InsertBitFieldValue()
ProcessBitFieldInitialization()
The comments say things like:
// If this target has bitfields laid out in big-endian order, invert the bit
// in the word if needed.
// If this is a big-endian bit-field, take the top NumBitsToInsert
// bits from the bitfield value.
// If this is a little-endian bit-field, take the bottom NumBitsToInsert
// bits from the bitfield value.
But as I understand it, in GCC BITS_BIG_ENDIAN has nothing to do with
how bit-fields are laid out. All it affects is how the operands of the
sign_extract and zero_extract RTXs are interpreted. This is what the
GCC internals manual says, and it matches what I see in the GCC source
code.
So can anyone explain why llvm-gcc depends on BITS_BIG_ENDIAN in this way?
Thanks,
Jay.