[LLVMdev] [fwd] Re: LLVM Compiler Infrastructure

Misha Brukman brukman at cs.uiuc.edu
Tue Nov 1 13:09:25 PST 2005


Hi, Yiping!

I am not sure of the answer to your question, but I am forwarding it to
the LLVMdev list where I am sure someone will be able to answer you.

Please send development questions directly to LLVMdev and you will get a
quicker response, as it is read by many LLVM developers.

----- Forwarded message from Yiping Fan <fanyp at cs.ucla.edu> -----

Date: Mon, 31 Oct 2005 17:20:24 -0800
From: "Yiping Fan" <fanyp at cs.ucla.edu>
To: "Misha Brukman" <brukman at uiuc.edu>, "guoling han" <leohgl at cs.ucla.edu>,
        <cong at cs.ucla.edu>, "'Zhiru Zhang'" <zhiruz at cs.ucla.edu>
Cc: "Brian Gaeke" <gaeke at uiuc.edu>
Subject: Re: LLVM Compiler Infrastructure

Hi Misha, 
    How have you been? It has been a long time since we last exchanged email.
We have been using your LLVM compiler in our xPilot behavioral synthesis system and have made
great progress, thanks to your excellent work. 
   Now we have run into a small but annoying problem. When we feed the following C code to llvm-gcc,

short test(int x) {
        return (short) (x >> 10);
}

   it generates the following LLVM code:

; ModuleID = 'test.bc'
target endian = little
target pointersize = 32
target triple = "i686-pc-linux-gnu"
deplibs = [ "c", "crtend" ]

implementation   ; Functions:

short %test(int %x) {
entry:
        %x = cast int %x to uint                ; <uint> [#uses=1]
        %tmp.2 = shr uint %x, ubyte 10          ; <uint> [#uses=1]
        %tmp.3 = cast uint %tmp.2 to short              ; <short> [#uses=1]
        ret short %tmp.3
}

Basically, the LLVM frontend first casts the integer to unsigned and then performs the shift.
This conversion seems to occur often for SHIFT operations. Although it is correct for a 32-bit
general-purpose processor, it does not fit hardware synthesis well, since we would need
to use wider components (up to 32 bits) to implement these operations; otherwise we may lose the sign bit.
We would prefer LLVM code with no (or very few) CAST operations, doing the SHR directly on signed integers.
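A minimal sketch (my own illustration, not from this thread; function names are hypothetical) of why the cast is correct on a 32-bit two's-complement target for this particular function: the short result keeps only bits 10..25 of x, while logical and arithmetic right shifts differ only in the fill bits (bits 22..31 of the shifted value), which the truncation to short discards.

```c
/* Hypothetical demonstration: for (short)(x >> 10), replacing the
   arithmetic shift with a logical (unsigned) shift is semantics-
   preserving, because the bits that differ are thrown away by the
   truncation to short. Assumes a two's-complement target where
   signed right shift is arithmetic and narrowing conversions wrap
   (both implementation-defined in C, but universal in practice). */
short shr_signed(int x)   { return (short)(x >> 10); }
short shr_unsigned(int x) { return (short)((unsigned)x >> 10); }
```

On such a target the two functions agree for every input, which is presumably why the frontend is free to emit the unsigned form.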

So our question is: do you know of any option in LLVM to disable this kind of cast-to-unsigned
code generation?

Thank you very much.

-Yiping Fan
UCLA Computer Science Department 
4651 Boelter Hall 
503 Hilgard Ave, Los Angeles, CA 90095
Tel: 310-206-5449 
Email: fanyp at cs.ucla.edu
WWW: http://ballade.cs.ucla.edu/~fanyp

----- End forwarded message -----

-- 
Misha Brukman :: http://misha.brukman.net :: http://llvm.cs.uiuc.edu
