[LLVMdev] Clang and i128

Rotem, Nadav nadav.rotem at intel.com
Tue Apr 24 02:04:21 PDT 2012


Mario, 

The ScalarReplAggregates pass attempts to convert structs into scalars to allow many other optimizations to apply.  Try running this pass with a different threshold (see the sketch below), or place a breakpoint on ConvertScalar_ExtractValue and check whether you can manually disable some of the transformations in SRA.
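
For example, something along these lines (a rough sketch against the LLVM 3.0 C++ API; double-check the exact signature of createScalarReplAggregatesPass in your checkout) runs SRA by hand with a threshold of 0:

  #include "llvm/LLVMContext.h"
  #include "llvm/Module.h"
  #include "llvm/PassManager.h"
  #include "llvm/Support/IRReader.h"
  #include "llvm/Support/SourceMgr.h"
  #include "llvm/Support/raw_ostream.h"
  #include "llvm/Transforms/Scalar.h"

  int main() {
    llvm::LLVMContext Context;
    llvm::SMDiagnostic Err;
    llvm::Module *M = llvm::ParseIRFile("foo.ll", Err, Context);
    if (!M) {
      Err.print("sra-test", llvm::errs());
      return 1;
    }

    llvm::FunctionPassManager FPM(M);
    // Threshold caps the size (in bytes) of aggregates SRA will
    // scalarize; 0 should keep it from rewriting a larger buffer
    // as one wide integer.
    FPM.add(llvm::createScalarReplAggregatesPass(/*Threshold=*/0));

    FPM.doInitialization();
    for (llvm::Module::iterator F = M->begin(), E = M->end(); F != E; ++F)
      FPM.run(*F);
    FPM.doFinalization();
    return 0;
  }

Comparing the output with and without the pass should show you which allocas it rewrites.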

Nadav


-----Original Message-----
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Mario Schwalbe
Sent: Tuesday, April 24, 2012 11:27
To: llvmdev at cs.uiuc.edu
Subject: [LLVMdev] Clang and i128

Hi all,

I currently use LLVM 3.0 clang to compile my source code to bitcode (on an x86-64 machine) before it is later processed by a pass, like this:

$ clang -m32 -O3 -S foo.c -emit-llvm -o foo.ll

However, for some reason the resulting module contains 128-bit integer instructions, e.g.:

%6 = load i8* %arrayidx.1.i, align 1, !tbaa !0
%7 = zext i8 %6 to i128
%8 = shl nuw nsw i128 %7, 8

which the pass can't handle (and never will).

So my question is: why does clang do this? The code doesn't use integer types larger than 32 bits. Is there an option to prevent clang from using those types? If not, which pass might be responsible for this kind of optimization?
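
For illustration, here is a reduced, hypothetical case (not my actual source) of the kind of byte-wise buffer code that -O3 can turn into i128 arithmetic of this shape:

  #include <stdint.h>
  #include <string.h>

  /* Hypothetical reproducer: a 16-byte local buffer written byte by
   * byte and read back in wider chunks.  SRA can promote the whole
   * alloca to a single i128, turning each byte store into the kind of
   * zext/shl sequence shown above. */
  uint64_t fold16(const uint8_t *src) {
    uint8_t tmp[16];
    for (int i = 0; i < 16; i++)
      tmp[i] = src[i] ^ 0x5a;          /* byte-wise stores */

    uint64_t lo, hi;
    memcpy(&lo, tmp, 8);               /* wider loads of the same buffer */
    memcpy(&hi, tmp + 8, 8);
    return lo ^ hi;
  }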

Thanks in advance,
Mario