[LLVMdev] llvm-gcc miscompilation or it's the gcc's rule?

Rob Barris rbarris at blizzard.com
Sun Jan 13 22:35:14 PST 2008


I don't think C has a single operation that expresses a 32-bit x 32-bit -> 64-bit
multiply, even though there is (on x86, anyway) a hardware instruction that does it.

The type of your expression (x * y) is still uint32_t, so the multiplication
itself is done in 32 bits and wraps modulo 2^32. The implicit conversion up
to uint64_t as part of the return statement happens after the multiply, so it
doesn't change this.
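
If you actually want the full 64-bit product, the usual idiom is to widen one
operand before the multiply, so the conversion applies to the operands rather
than to the already-truncated result. A minimal, untested sketch of that
pattern (the function names here are just for illustration):

#include <stdint.h>

/* Multiplication happens in uint32_t; the product wraps modulo 2^32
   and only then is zero-extended to 64 bits. */
uint64_t mul_truncating(uint32_t x, uint32_t y) {
    return x * y;
}

/* Casting one operand first converts the other operand as well, so the
   multiplication is done in 64 bits and no high bits are lost. */
uint64_t mul_full(uint32_t x, uint32_t y) {
    return (uint64_t)x * y;
}

Compilers generally recognize the second form and still emit a single widening
multiply on targets that have one, so there is no extra cost for the cast.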

On Jan 13, 2008, at 10:29 PM, Zhou Sheng wrote:

> Hi,
>
> Here is C function:
>
> uint64_t mul(uint32_t x, uint32_t y) {
>   return x * y;
> }
>
> current llvm-gcc-4.0 with -O3 will compile it to:
>
> define i64 @mul(i32 %x, i32 %y) nounwind  {
> entry:
>     %tmp3 = mul i32 %y, %x      ; <i32> [#uses=1]
>     %tmp34 = zext i32 %tmp3 to i64      ; <i64> [#uses=1]
>     ret i64 %tmp34
> }
>
> This seems incorrect. I think it should extend %x and %y to i64 first  
> and then do the multiplication.
> Otherwise, the result may lose some bits if %x and %y are very big.
>
> gcc seems to have the same issue. Is this a bug or just gcc's rule?
>
>
> Thanks,
> Sheng.



