[LLVMdev] i1 Values
Micah Villmow
micah.villmow at softmachines.com
Thu Feb 5 13:14:57 PST 2015
I can see two reasons for it:
1) An integer way to represent -0 and +0 from the floating point domain.
2) An unsigned i1 represents 0 and 1 (unsigned values lie in the range 0 to 2^N - 1), but a signed i1 represents 0 and -1 (signed values lie in the range -2^(N-1) to 2^(N-1) - 1). This distinction matters when promoting to larger integers, since it determines whether sign or zero extension is needed.
> -----Original Message-----
> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu]
> On Behalf Of dag at cray.com
> Sent: Thursday, February 05, 2015 12:49 PM
> To: llvmdev at cs.uiuc.edu
> Subject: [LLVMdev] i1 Values
>
> I've been debugging some strange happenings over here and I put an assert
> in APInt to catch what I think is the source of the problem:
>
> int64_t getSExtValue() const {
>   // An i1 -1 is unrepresentable.
>   assert(BitWidth != 1 && "Signed i1 value is not representable!");
>
> To me an i1 -1 makes no sense whatsoever. It is not representable in
> two's-complement form. It cannot be distinguished from an unsigned i1 1.
>
> It turns out this assert triggers all over the place. Before I dive into this
> rat hole, I'd like to confirm with others my sense that we shouldn't be
> creating i1 -1 values. Is there some legitimate reason to do so?
>
> -David
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev