[LLVMdev] 8-bit DIV IR irregularities

Nowicki, Tyler tyler.nowicki at intel.com
Wed Jun 27 17:22:25 PDT 2012


I understand, but this sounds like legalization. Does every architecture trigger an overflow exception, as opposed to setting a flag bit? Perhaps it makes more sense to handle this in the backends that trigger an overflow exception?

I'm working on a modification to DIV right now in the x86 backend for Intel Atom that will improve performance; however, because the *actual* 8-bit operation has been replaced with a 32-bit operation, it becomes much more difficult to detect when a real 32-bit divide is happening.

If someone knows where the 8-bit DIV is being handled in the IR, I could look into this change.
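
For reference, here is a minimal C case that reproduces what I'm seeing (a sketch; the exact IR Clang emits may differ in names and attributes):

    signed char   sdiv8(signed char a, signed char b)     { return a / b; }
    unsigned char udiv8(unsigned char a, unsigned char b) { return a / b; }

The signed version keeps the 32-bit divide introduced by C's integer promotions through optimization, while the unsigned version is narrowed back to 8 bits, roughly:

    ; signed: the promoted 32-bit divide survives
    %a32 = sext i8 %a to i32
    %b32 = sext i8 %b to i32
    %q32 = sdiv i32 %a32, %b32
    %q   = trunc i32 %q32 to i8

    ; unsigned: the optimizer narrows the divide back to 8 bits
    %q   = udiv i8 %a, %b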

Tyler

-----Original Message-----
From: Eli Friedman [mailto:eli.friedman at gmail.com] 
Sent: Wednesday, June 27, 2012 7:07 PM
To: Nowicki, Tyler
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] 8-bit DIV IR irregularities

On Wed, Jun 27, 2012 at 4:02 PM, Nowicki, Tyler <tyler.nowicki at intel.com> wrote:
> Hi,
>
>
>
> I noticed that when dividing signed 8-bit values the IR uses a
> 32-bit signed divide; however, when unsigned 8-bit values are used, the
> IR uses an 8-bit unsigned divide. Why not use an 8-bit signed divide
> when using 8-bit signed values?

"sdiv i8 -128, -1" has undefined behavior; "sdiv i32 -128, -1" is well-defined.

-Eli



