[LLVMbugs] [Bug 6394] Optimizer swaps bitcast and getelementptr in an invalid way

bugzilla-daemon at llvm.org bugzilla-daemon at llvm.org
Sat Jan 1 15:49:22 PST 2011


http://llvm.org/bugs/show_bug.cgi?id=6394

Henning Thielemann <llvm at henning-thielemann.de> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
            Version|2.6                         |2.8
         Resolution|FIXED                       |

--- Comment #7 from Henning Thielemann <llvm at henning-thielemann.de> 2011-01-01 17:49:21 CST ---
(In reply to comment #6)
> Please update to LLVM 2.8, or better yet, mainline.  We do not put out bug
> fixes for previous releases.

I'm continuing to comment on this problem because it still exists. The only
difference between LLVM 2.6 and LLVM 2.8 is that the problem now occurs only
when calling the optimizer via LLVM's C interface; I can no longer reproduce it
via 'opt' in the shell. The ticket was closed as FIXED without a comment on how
it was fixed. The last comment before closing the ticket was that I should set
a target string in the LL file, and I answered that I initially have no LL
file, because I am using the JIT (via the C wrapper to LLVM).

I think the following phenomenon is just another instance of the same problem.
I generated code with the JIT, wrote it to a bitcode file, and disassembled it.
Then I optimized it, matching -O1 as closely as the C wrapper around LLVM
allows. In the optimized module, some getelementptrs into a struct are replaced
by accesses via constant offsets and pointer casts, where the offsets appear to
be computed for a 64-bit machine (but I have a 32-bit machine).

optimized by opt (correct):
  %9 = getelementptr %0* %2, i32 0, i32 0, i32 0
  store i1 true, i1* %9
  %10 = getelementptr %0* %2, i32 0, i32 0, i32 1, i32 0, i32 0, i32 0
  store i32 1, i32* %10
  %11 = getelementptr %0* %2, i32 0, i32 0, i32 1, i32 0, i32 0, i32 1, i32 0, i32 1
  store float 0x3FD99999A0000000, float* %11
  %12 = getelementptr %0* %2, i32 0, i32 0, i32 1, i32 0, i32 1, i32 0, i32 0, i32 1
  store i32 %4, i32* %12
  %13 = getelementptr %0* %2, i32 0, i32 1, i32 0
  store float 0.000000e+00, float* %13
  %14 = getelementptr %0* %2, i32 0, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 0, i32 0, i32 0
  store i32 %6, i32* %14
  %15 = getelementptr %0* %2, i32 0, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 0, i32 0, i32 1
  store float* %8, float** %15
  %16 = getelementptr %0* %2, i32 0, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 0, i32 1, i32 1
  store float 0.000000e+00, float* %16


optimized by JIT (invalid):
  %9 = bitcast i8* %1 to i1*
  store i1 true, i1* %9, align 1
  %10 = getelementptr i8* %1, i64 4
  %11 = bitcast i8* %10 to i32*
  store i32 1, i32* %11, align 4
  %12 = getelementptr i8* %1, i64 8
  %13 = bitcast i8* %12 to float*
  store float 0x3FD99999A0000000, float* %13, align 4
  %14 = getelementptr i8* %1, i64 12
  %15 = bitcast i8* %14 to i32*
  store i32 %4, i32* %15, align 4
  %16 = getelementptr i8* %1, i64 16
  %17 = bitcast i8* %16 to float*
  store float 0.000000e+00, float* %17, align 4
  %18 = getelementptr i8* %1, i64 24
  %19 = bitcast i8* %18 to i32*
  store i32 %6, i32* %19, align 4
  %20 = getelementptr i8* %1, i64 32
  %21 = bitcast i8* %20 to float**
  store float* %8, float** %21, align 8
  %22 = getelementptr i8* %1, i64 40
  %23 = bitcast i8* %22 to float*
  store float 0.000000e+00, float* %23, align 4


The offsets 16, 24, 32, 40 should have been 16, 20, 24, 28.
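The difference is consistent with the pointer field being laid out with 64-bit
size and alignment. A minimal sketch of the arithmetic, assuming the last four
stores hit a float followed by a pointer-aligned sub-struct { i32, float*,
float } (the struct shape is my guess reconstructed from the stores above, not
taken from the actual module):

```c
/* Round `off` up to the next multiple of `align`. */
static unsigned align_up(unsigned off, unsigned align) {
    return (off + align - 1) / align * align;
}

/* Hypothetical layout walk: a float at offset 16 followed by a
 * sub-struct { i32, float*, float } aligned like a pointer.
 * ptr_size/ptr_align = 8/8 models the 64-bit layout the JIT used,
 * 4/4 the layout my 32-bit machine actually has. */
static void layout(unsigned ptr_size, unsigned ptr_align, unsigned out[4]) {
    unsigned off = 16;                  /* float 0.0e+00 stored here */
    out[0] = off;
    off = align_up(off + 4, ptr_align); /* sub-struct starts pointer-aligned */
    out[1] = off;                       /* i32 %6 */
    off = align_up(off + 4, ptr_align);
    out[2] = off;                       /* float* %8 */
    out[3] = off + ptr_size;            /* trailing float */
}
```

With (8, 8) this yields 16, 24, 32, 40 (the JIT's offsets); with (4, 4) it
yields 16, 20, 24, 28.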

The Release Notes of 2.7 say that in LLVM 2.6 an empty target string meant
'SparcV9'. So I tried to reproduce the JIT behaviour via 'opt' by placing

target triple = "sparcv9"

at the top of a disassembled bitcode file. However, 'opt' still does not
generate those accesses via constant offsets. The LLVM tools also do not
complain about fictional target names, which made me nervous.
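Incidentally, it may not be the triple that matters in this experiment: the
constant-offset folding is driven by the module's `target datalayout` string.
A sketch of what could be placed at the top of the disassembled file to
provoke the 64-bit offsets on purpose (the layout string is an assumption,
modelled on what clang emits for x86-64; the authoritative string for a target
is whatever the tools themselves produce):

```llvm
; Hypothetical module header: a datalayout with 64-bit pointers
; (p:64:64:64) should make opt fold the getelementptrs to the same
; wrong offsets the JIT produced on my 32-bit machine.
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-n8:16:32:64"
target triple = "x86_64-unknown-linux-gnu"
```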

In the end I still do not know how to set the target specification via the C
wrapper around LLVM. I hoped that calling LLVMInitializeX86Target and
LLVMInitializeX86TargetInfo would be enough, since these are the appropriate
functions for my machine. If I have to set something else (LLVMSetDataLayout?),
then I wonder how to do that (how does one find the appropriate settings for
every target?) and why its default does not match the information I provided at
initialization time.
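For what it's worth, the C API does expose module-level setters for both the
triple and the datalayout, declared in llvm-c/Core.h. What I have not found
documented is where to obtain the correct datalayout string for a given
target; the concrete strings below are my assumptions for a 32-bit x86 Linux
host, not values taken from any LLVM tool:

```c
#include <llvm-c/Core.h>

/* Sketch: stamp a triple and datalayout onto a module before handing
 * it to the optimizer. Both setters exist in the C API; the string
 * values here are guesses for 32-bit x86 Linux. */
void set_target_info(LLVMModuleRef module) {
    LLVMSetTarget(module, "i686-pc-linux-gnu");
    LLVMSetDataLayout(module,
        "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-"
        "i64:32:64-f32:32:32-f64:32:64-f80:32:32-n8:16:32");
}
```

If the JIT's optimizer defaulted to a 64-bit layout when the module carries no
datalayout, setting it explicitly like this should at least make the behaviour
deterministic.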

Since you said that the problem is fixed, I assumed that my current problem
must have another cause. Thus I have spent more than 15 hours trying to
simulate 'opt''s behaviour via LLVM's C interface, and since JIT-opt and
shell-opt still differed, I tried to simulate the JIT optimizer's behaviour
with 'opt'. Without success. I flooded my code with debug output. I tried hard
to export my JIT-constructed functions and typical parameter records to disk,
together with a driving main program, so that the bug could be narrowed down
using bugpoint. No luck: the JIT fails, the LLVM shell tools work. I dissected
my code with GDB in order to see what actually goes wrong.

I am really frustrated now. I can well imagine that I am using LLVM the wrong
way. But if it is that easy to get wrong, then something must be improved in
the JIT or in the C wrapper. At the very least, the JIT optimizer's behaviour
must be better documented.
