[llvm-dev] LLVM disagrees with GCC on bitfield handling

Bingfeng Mei via llvm-dev llvm-dev at lists.llvm.org
Thu Oct 26 08:50:05 PDT 2017

Resent the message, as the previous llvm list address was wrong.

Sorry if this question has been raised in the past.  I am running the GCC
testsuite for our LLVM port. There are several failures related to bitfield
handling (pr32244-1.c, bitfld-3.c, bitfld-5.c, etc.) where LLVM disagrees
with GCC. Taking pr32244-1.c as an example,

struct foo {
  unsigned long long b:40;
} x;

extern void abort (void);

void test1(unsigned long long res)
{
  /* The shift is carried out in 40 bit precision.  */
  if (x.b<<32 != res)
    abort ();
}

int main()
{
  x.b = 0x0100;
  return 0;
}
The target machine has a 32-bit int and a 64-bit long long. GCC expects the
shift to be performed in 40-bit precision (see the comment above), whereas
LLVM first converts x.b to a 64-bit unsigned long long and does the
shift/comparison afterwards.
I checked the standard. It says the shift operands undergo integer
promotion first, which doesn't change anything here because the 40-bit
field is already wider than int, so GCC's approach seems to make sense.
On the other hand, you can argue that when a bit-field is loaded it is
first converted to its declared type (unsigned long long here), and the
arithmetic is then done in that type. The C standard doesn't define
arithmetic on arbitrary bit widths, so the operation has to be carried out
in an ordinary type. I am confused about which approach conforms to the
standard, or whether this is just a grey area the standard leaves
undefined. Any suggestion is greatly appreciated.

Bingfeng Mei
