[LLVMdev] x86_fp80, f80, and -m96bit-long-double

Joel E. Denny jdenny at etinternational.com
Mon Nov 1 09:31:55 PDT 2010


Hi,

I am targeting x86-64 and working with a C frontend that folds 
sizeof(long double) to 12.  For now, I am trying to avoid modifying that 
part of the frontend by finding a way to tell LLVM that 12 bytes are 
allocated for each x86_fp80.  To explore this, I tried an experiment with 
llvm-gcc:

% llvm-gcc --version | head -1
llvm-gcc (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2.8)
% llvmc --version
Low Level Virtual Machine (http://llvm.org/):
  llvm version 2.8
  Optimized build.
  Built Oct 25 2010 (10:39:46).
  Host: x86_64-unknown-linux-gnu
  Host CPU: corei7

  Registered Targets:
    (none)
% cat > test.c
#include <stdio.h>
#include <stdlib.h>
#define SIZE 5
int
main(void)
{
  long double *a = malloc(sizeof(long double) * SIZE);
  for (int i = 0; i < SIZE; ++i)
    a[i] = i+1;
  for (int i = 0; i < SIZE; ++i)
    printf ("a[%d] = %Lf\n", i, a[i]);
  free (a);
  return 0;
}
% llvm-gcc -std=c99 -m96bit-long-double -emit-llvm -S -o test.ll test.c
% llvmc test.ll
% valgrind ./a.out |& grep Invalid
==3882== Invalid write of size 4
==3882== Invalid write of size 2
==3882== Invalid read of size 4
==3882== Invalid read of size 2

Looking inside test.ll, I see f80:128:128 in the target datalayout, but I 
also see sizeof(long double)*5 already folded into 60.  Changing 
f80:128:128 to f80:96:96 by hand does not fix the errors reported by 
valgrind.  If I instead fix the folded constant (60 -> 80), the errors go 
away, of course.

Does llvm-gcc not intend to support -m96bit-long-double?

Is there any way to instruct LLVM to assume 12 bytes are allocated for 
x86_fp80?  I suppose I could use [12 x i8] and then bitcast to x86_fp80 
when I want to access the value.  Is that the best way to handle this 
(other than changing the way my frontend folds sizeof)?

Thanks.
