[LLVMdev] Clang and i128

Mario Schwalbe mario at se.inf.tu-dresden.de
Tue Apr 24 01:27:20 PDT 2012


Hi all,

I currently use LLVM 3.0 clang to compile my source code to bitcode
(on an x86-64 machine) before it is later processed by a pass, like this:

$ clang -m32 -O3 -S foo.c -emit-llvm -o foo.ll

However, for some reason the resulting module contains instructions
operating on 128-bit integers, e.g.:

%6 = load i8* %arrayidx.1.i, align 1, !tbaa !0
%7 = zext i8 %6 to i128
%8 = shl nuw nsw i128 %7, 8

which the pass can't handle (and never will).
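For illustration, here is a minimal sketch of the kind of code involved
(hypothetical, not the actual foo.c; the function name, buffer size and
everything else in it are made up to show the pattern):

/* Hypothetical reduction -- not the actual foo.c. A small local byte
   buffer is filled element by element and then read back; no type in
   the source is wider than 32 bits. */
unsigned int sum16(const unsigned char *src)
{
    unsigned char tmp[16];
    unsigned int i, sum = 0;

    for (i = 0; i < 16; ++i)   /* fill the buffer byte by byte */
        tmp[i] = src[i];

    for (i = 0; i < 16; ++i)   /* consume it again */
        sum += tmp[i];

    return sum;
}

(Again, this is only a sketch of the pattern, not a confirmed reproducer.)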

So my question is: why does clang do this? The source code doesn't use
any integer type larger than 32 bits. Is there an option to prevent clang
from introducing such types? If not, which pass might be responsible for
this kind of optimization?

Thanks in advance,
Mario


