[PATCH] D46915: [Fixed Point Arithmetic] Fixed Point Precision Bits and Fixed Point Literals
Leonard Chan via Phabricator via cfe-commits
cfe-commits at lists.llvm.org
Thu Jun 14 14:31:02 PDT 2018
leonardchan added inline comments.
================
Comment at: include/clang/Basic/TargetInfo.h:89
+ // corresponding unsaturated types.
+ unsigned char ShortAccumFBits, ShortAccumIBits;
+ unsigned char AccumFBits, AccumIBits;
----------------
ebevhan wrote:
> leonardchan wrote:
> > ebevhan wrote:
> > > leonardchan wrote:
> > > > ebevhan wrote:
> > > > > leonardchan wrote:
> > > > > > ebevhan wrote:
> > > > > > > I suspect it's still possible to calculate the ibits based on the fbits, even for _Accum.
> > > > > > >
> > > > > > > Are the unsigned values needed? The fbits for unsigned _Fract are the same as for signed _Fract if SameFBits is set, and +1 otherwise. The same should go for unsigned _Accum, but I don't think it's entirely clear how this affects the integral part.
> > > > > > Similar to the previous comment, if we choose to make SameFBits dependent on/controlled by the target, then I think it would be better for the target to explicitly specify the integral bits since there could be some cases where there may be padding (such as for unsigned _Accum types if this flag is set), and in this case, I think it should be explicit that the `bit_width != IBits + FBits`.
> > > > > >
> > > > > > We also can't fill in that padding for the unsigned _Accum types as an extra integral bit since the standard says that `"each signed accum type has at least as many integral bits as its corresponding unsigned accum type".`
> > > > > >
> > > > > > For the unsigned _Fract values, I think we can get rid of them if we choose to keep the flag instead, but it would no longer apply to unsigned _Accum types since it would allow for extra padding in these types and would conflict with the logic of `bit_width == IBits + FBits`
> > > > > >
> > > > > > For now, I actually think the simplest option is to keep these target properties, but have the target override them individually, with checks to make sure the values adhere to the standard.
> > > > > Is it not the case that `bitwidth != IBits + FBits` for signed types only? Signed types require a sign bit (which is neither a fractional bit nor an integral bit, according to spec) but unsigned types do not, and if IBits and FBits for the signed and unsigned types are the same, the MSB is simply a padding bit, at least for _Accum (for _Fract everything above the fbits is padding).
> > > > >
> > > > > My reasoning for keeping the number of configurable values down was to limit the number of twiddly knobs to make the implementation simpler. Granting a lot of freedom is nice, but I suspect that it will be quite hard to get the implementation working properly for every valid configuration. I also don't really see much of a reason for `FBits != UFBits` in general. I know the spec gives a recommendation to implement it that way, but I think the benefit of normalizing the signed and unsigned representations outweighs the lost bit in the unsigned type.
> > > > >
> > > > > It's hard to say what the differences are beyond that since I'm not sure how you plan on treating padding bits. In our implementation, padding bits (the MSB in all of the unsigned types for us) after any operation are zeroed.
> > > > I see. The implementation would be much simpler this way. The initial idea was to treat the padding bits as "don't care" bits where we would mask only the data bits when doing operations like comparisons that look at the whole integer.
> > > That does work, but for binary operations it means you might need multiple maskings per operation. If you mask after every operation instead, you only need one. In our implementation, the only padding bit we have is the MSB in the unsigned types. We simply insert a 'bit clear' after every operation that produces an unsigned type (with some exceptions).
> > >
> > > The E-C spec says that padding bits are 'unspecified', but implementation-wise, they have to be defined to something in order to operate on them. So long as you ensure that it is never possible to observe anything but zero in those bits from a program perspective, most operations just work. Naturally, there are ways to subvert this design. If you have a `signed _Fract *` and cast it to an `unsigned _Fract *` and the values happen to be negative, the padding bit will be set and you'll get exciting behavior.
> > >
> > > Although... I just realized that the topmost half of the `_Fract`s in your configuration is all padding. That might make things a bit more complicated for things like signed types... Ideally the sign bit would be extended into the padding for those.
> > Yeah, I initially had 2 general purpose ideas for handling the padding:
> >
> > - Zero/sign extend them after all operations that modify the value (+, *, <<, casting to another type, ...). This way, regardless of how much padding there is, the sign of a fixed point type always extends through the padding bits of the underlying scaled integer.
> > - Mask only the data and sign bits before all operations that read the value without modifying it (==, !=, <, ...). This way we only compare the non-padding bits when performing operations that read all bits of the underlying integer.
> >
> > I wanted to choose the method that puts the cost on the operations fixed point types would least likely be used for; that is, the question is whether these types are generally used more for arithmetic or for comparisons. I chose the second method, but don't have any strong evidence to support this choice. This is also what the later patches implement, but it can be reworked.
> I see what you mean... I also think it's more likely that the types will be used with arithmetic operators than comparisons. However, I don't have any evidence for it but it feels like it could be dangerous to let 'invalid' values hang around; it's safer to ensure that all values are correct after every operation. In our implementation we do the first method for our unsigned types. I wrote a small example (a FIR filter) to see what the overhead of using unsigned instead of signed was, and the x86 asm went from 20 to 23 instructions, which isn't huge. Granted, we do not use unsigned fixed-point types much (if at all) so overhead there isn't really a problem for us.
>
> I think that one of the important benefits of the first method is that if unsigned fixed-point types with padding always have their padding cleared and the fractional bits are the same, then hardware operations for signed fixed-point types can be reused for the unsigned types.
>
> I'm trying to come up with an example where things can go wrong if padding is not corrected after operations, but I can't seem to do it without invoking overflow, which is UB on both signed and unsigned fixed-point if I'm not mistaken. The E-C spec does not seem to mention anything explicit about what happens when you convert a negative signed fixed-point type to an unsigned one, so I think that's UB as well.
I may go with your recommendation then, since it is what you use and have tested, and I don't have any strong opinions either way. Also, I think conversion from a negative signed value to unsigned falls under UB unless `FX_FRACT/ACCUM_OVERFLOW` is specified as `SAT`.
================
Comment at: lib/AST/ExprConstant.cpp:9437
+ }
+ return Success(-Value, E);
+ }
----------------
ebevhan wrote:
> leonardchan wrote:
> > ebevhan wrote:
> > > This looks very suspect to me as well... This might not respect padding on types whose sign bit is not the MSB, such as _Fract. The same goes for the overflow check above.
> > >
> > > I think it's quite important that we define the semantics of what padding bits should contain. Probably zeroes for unsigned types and a sexted sign bit for signed types.
> > Initially I made the _Accum types and their corresponding _Fract types the same width to avoid casting between an _Accum and a differently sized _Fract (e.g. `short _Fract` to `short _Accum`), but after thinking about it, it might be simpler in the end to make each _Fract width half of the corresponding _Accum width, with the number of fractional bits one less than the width (unless SameFBits is set, in which case there will still be only 1 padding bit).
> >
> > This would also reduce the number of knobs that would be tweaked for different targets and help cover this case by effectively ignoring the padding bits.
> Yes, that sounds like a good idea. It is also the case for us that the scale of our _Fract types is the width - 1. This makes many operations that care about sign bits trivial, and conversion isn't that bad, it's just a sign/zero extension which should be no problem so long as the type widths are sensible (_Fract to _Accum would be 8->16, 16->32, 32->64 for the respective sizes).
>
> I guess this means dropping the FractFBits configuration values? I suppose the AccumFBits values cannot be dropped as the specification does not have any restrictions between _Accum and _Fract (I think?).
They do not have restrictions concerning fractional bits between _Accum and _Fract, but keeping them consistent is labelled as a recommendation:
```
- Each unsigned accum type has the same number of fractional bits as its corresponding unsigned fract type.
- Each signed accum type has the same number of fractional bits as either its corresponding signed fract type or its corresponding unsigned fract type.
```
================
Comment at: lib/Sema/SemaExpr.cpp:1248
+ bool RHSFixed = RHSType->isFixedPointType();
+
+ if (LHSFixed && RHSFixed) {
----------------
ebevhan wrote:
> ebevhan wrote:
> > leonardchan wrote:
> > > ebevhan wrote:
> > > > leonardchan wrote:
> > > > > ebevhan wrote:
> > > > > > I don't see how these semantics work properly. The specification requires that operations be done in the full precision of both types. You cannot convert the types before performing the operation like this, since the operation will not be done in full precision in that case.
> > > > > >
> > > > > > > The operator semantics of Embedded-C allow the operand types of binary operators to be different. It's only when you've performed the operation that you are allowed to convert the result to the resulting type.
> > > > > Initially the idea was to convert both sides to fixed point types, then perform standard binary operations between the fixed point types.
> > > > >
> > > > > For the example, a `fract * int` would have the int converted to a fixed point type by left shifting it by the scale of the fract, multiplying, then right shifting by the scale again to get the resulting fract. The only unhandled thing is overflow, but the precision of the fract remains the same. The operands would also be casted up beforehand so there was enough space to store the result, which was casted down back to the original fract after performing the right shift by the scale.
> > > > >
> > > > > Operations between fixed point types would follow a similar process of casting both operands to the higher rank fixed point type, and depending on the operation, more underlying shifting and casting would be done to retain full precision of the higher ranked type.
> > > > >
> > > > > Though I will admit that I did not realize until now that multiplying a fixed point type by an integer does not require shifting the integer.
> > > > I see how you've reasoned; this is how C normally works. The `fract` is of higher rank than `int` and therefore is the 'common type' of the operation. However, even though it is higher rank there is no guarantee that you can perform the operation without overflowing. And overflow matters here; the spec says that it must be done in the full precision (integral + fractional) of both types.
> > > >
> > > > > The only unhandled thing is overflow, but the precision of the fract remains the same. The operands would also be casted up beforehand so there was enough space to store the result, which was casted down back to the original fract after performing the right shift by the scale.
> > > >
> > > > The precision remains the same (and while it doesn't have to be the same to perform an operation, it makes the implementation more regular; things like addition and subtraction 'just work'), but you cannot perform a conversion to `fract` *before* the operation itself, since if you do, there's nothing to 'cast up'. Casting up is needed for things like `fract * fract` to prevent overflow, but for `fract * int` you need to cast to a type that can fit both all values of the int and all values of the fract, and *then* you can cast up before doing the multiplication.
> > > >
> > > > > Operations between fixed point types would follow a similar process of casting both operands to the higher rank fixed point type, and depending on the operation, more underlying shifting and casting would be done to retain full precision of the higher ranked type.
> > > >
> > > > > This might work, but I feel there could be edge cases. The E-C fixed-point ranks are very odd as they don't reflect reality; `short _Accum` cannot be considered strictly 'above' `long _Fract`, but the former has a higher rank than the latter. Depending on how the types are specified for a target, implicit casts between fixed-point types might inadvertently discard bits, even though the spec says that operations must be done in full precision.
> > > I see, so just to confirm, something like a `fract * int` would not result in any implicit casting between either operand, but any special arithmetic, like intermediate storage types or saturation handling, would be handled by the underlying IR?
> > >
> > > So really, no conversions/implicit type casts should be performed here, and instead all handling of arithmetic operations should happen somewhere during the codegen stage?
> > >
> > > I see, so just to confirm, something like a fract * int would not result in any implicit casting between either operand, but any special arithmetic, like intermediate storage types or saturation handling, would be handled by the underlying IR?
> >
> > Yes, for operations which require precision that cannot be provided by any of the existing types, there must be an 'invisible' implicit conversion to a type which can represent all of the values of either operand. This conversion cannot be represented in the AST as it is today.
> >
> > The simplest solution is indeed to not have any implicit cast at all in the AST and resolve these conversions when needed (CodeGen and consteval are the locations I can think of), but ultimately it feels a bit dirty... I think that the best solution AST-wise is to define a completely new type class (perhaps FullPrecisionFixedPointType) that represents a fixed-point type with arbitrary width, scale, signedness and saturation. Then you can define ImplicitCasts to an instance of this type that can fit both the `int` and the `fract`. I don't know if adding this is acceptable upstream, though.
> >
> > I think all of these rules must apply to fixed-fixed operations as well; a `short accum * long fract` must be done as a type that does not exist, similar to fixed-int. It's not clear how saturation should work here either...
> >
> > I also noticed now that the spec says in regards to comparison operators, `When comparing fixed-point values with fixed-point values or integer values, the values are compared directly; the values of the operands are not converted before the comparison is made.` I'm not sure what this means.
> In any case, to clarify, I think there are two paths to consider. Either:
>
> - Add a new type class to the type system that encapsulates an arbitrary-precision fixed-point type that can be used for implicit casts when operating on fixed-point and integer types. This is in my opinion the cleaner solution, since it retains invariants on the types of operators and simplifies any logic that deals with operators; or,
> - Leave the operands of these operations uncasted. This is in some way simpler, since it doesn't require adding a whole new type, but it complicates other parts of the code. Anything that wants to deal with fixed-point operators will need to know how to do fixed-point conversion as well, which isn't a very good separation of responsibility IMO. It also breaks the C invariant of operands of arithmetic types being in a common type, which might be surprising to people.
>
>
I'm actually more of a fan of the second case. Aside from the literal parsing in NumericLiteralParser, wouldn't the only other place that would actually need to know about fixed point conversion be `ScalarExprEmitter` under CodeGen/CGExprScalar.cpp?
It seems that this is the class that creates the binary operations, and other codegen classes like CodeGenFunction just make underlying calls to ScalarExprEmitter, so the actual conversion logic could be contained there. Most of the implicit casting handled under UsualArithmeticConversions also seems to be handled by `VisitCastExpr` in ScalarExprEmitter, so adding another cast kind would in the end just add another case to the switch statement there, which in turn may just result in another call into ScalarExprEmitter.
I can see how it might be weird at first that these types don't go through the usual arithmetic conversions, but the standard does specify that they don't.
================
Comment at: lib/Sema/SemaExpr.cpp:1248
+ bool RHSFixed = RHSType->isFixedPointType();
+
+ if (LHSFixed && RHSFixed) {
----------------
leonardchan wrote:
> ebevhan wrote:
> > I also noticed now that the spec says in regards to comparison operators, `When comparing fixed-point values with fixed-point values or integer values, the values are compared directly; the values of the operands are not converted before the comparison is made.` I'm not sure what this means.
Regarding the comparison operators, my guess is that during comparison operations specifically, the actual underlying values of each operand are compared directly instead of the usual special conversions taking place. That is, `1.0k != 1` but `1.0k == 128` (assuming a scale of 7). If this is the case, we could actually save a few operations by not having to shift the integer.
I also can't seem to find a test case in GCC where a fixed point type is explicitly compared against an integer. Normally they instead assign the fixed point value to an integral type, then compare that against another integer.
I'm referring to `CONV_ACCUM_INT` in
https://github.com/gcc-mirror/gcc/blob/e11be3ea01eaf8acd8cd86d3f9c427621b64e6b4/gcc/testsuite/gcc.dg/fixed-point/convert.h
Repository:
rC Clang
https://reviews.llvm.org/D46915