[llvm-branch-commits] [llvm] a89d751 - Add intrinsics for saturating float to int casts
Bjorn Pettersson via llvm-branch-commits
llvm-branch-commits at lists.llvm.org
Fri Dec 18 02:14:43 PST 2020
Author: Bjorn Pettersson
Date: 2020-12-18T11:09:41+01:00
New Revision: a89d751fb401540c89189e7c17ff64a6eca98587
URL: https://github.com/llvm/llvm-project/commit/a89d751fb401540c89189e7c17ff64a6eca98587
DIFF: https://github.com/llvm/llvm-project/commit/a89d751fb401540c89189e7c17ff64a6eca98587.diff
LOG: Add intrinsics for saturating float to int casts
This patch adds support for the fptoui.sat and fptosi.sat intrinsics,
which provide the same functionality as the existing fptoui
and fptosi instructions, but saturate (or return 0 for NaN) on
values unrepresentable in the target type, instead of returning
poison. Related mailing list discussion can be found at:
https://groups.google.com/d/msg/llvm-dev/cgDFaBmCnDQ/CZAIMj4IBAAJ
The intrinsics have overloaded source and result type and support
vector operands:
i32 @llvm.fptoui.sat.i32.f32(float %f)
i100 @llvm.fptoui.sat.i100.f64(double %f)
<4 x i32> @llvm.fptoui.sat.v4i32.v4f16(<4 x half> %f)
// etc
On the SelectionDAG layer two new ISD opcodes are added,
FP_TO_UINT_SAT and FP_TO_SINT_SAT. These opcodes have two operands
and one result. The second operand is an integer constant specifying
the scalar saturation width. The idea here is that initially the
second operand and the scalar width of the result type are the same,
but they may change during type legalization. For example:
i19 @llvm.fptosi.sat.i19.f32(float %f)
// builds
i19 fp_to_sint_sat f, 19
// type legalizes (through integer result promotion)
i32 fp_to_sint_sat f, 19
I went with this approach because saturating conversions do not
compose well. There is no good way of "adjusting" a saturating
conversion to i32 into one to i19 short of saturating twice.
Specifying the saturation width separately allows saturating directly
to the correct width.
There are two baseline expansions for the fp_to_xint_sat opcodes. If
the integer bounds can be exactly represented in the float type and
fminnum/fmaxnum are legal, we can expand to something like:
f = fmaxnum f, FP(MIN)
f = fminnum f, FP(MAX)
i = fptoxi f
i = select f uo f, 0, i # unnecessary if unsigned as 0 = MIN
If the bounds cannot be exactly represented, we expand to something
like this instead:
i = fptoxi f
i = select f ult FP(MIN), MIN, i
i = select f ogt FP(MAX), MAX, i
i = select f uo f, 0, i # unnecessary if unsigned as 0 = MIN
It should be noted that this expansion assumes a non-trapping fptoxi.
Initial tests cover AArch64, x86_64 and ARM, exercising all of the
scalar and vector legalization paths. ARM is included to test float
softening.
Original patch by @nikic and @ebevhan (based on D54696).
Differential Revision: https://reviews.llvm.org/D54749
Added:
llvm/test/CodeGen/AArch64/fptosi-sat-scalar.ll
llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll
llvm/test/CodeGen/AArch64/fptoui-sat-scalar.ll
llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll
llvm/test/CodeGen/ARM/fptosi-sat-scalar.ll
llvm/test/CodeGen/X86/fptosi-sat-scalar.ll
llvm/test/CodeGen/X86/fptoui-sat-scalar.ll
Modified:
llvm/docs/LangRef.rst
llvm/include/llvm/CodeGen/ISDOpcodes.h
llvm/include/llvm/CodeGen/TargetLowering.h
llvm/include/llvm/IR/Intrinsics.td
llvm/include/llvm/Target/TargetSelectionDAG.td
llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
llvm/lib/CodeGen/TargetLoweringBase.cpp
Removed:
################################################################################
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 4102b5d41c05..3db5879129ae 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -16426,6 +16426,120 @@ Examples:
%a = load i16, i16* @x, align 2
%res = call float @llvm.convert.from.fp16(i16 %a)
+Saturating floating-point to integer conversions
+------------------------------------------------
+
+The ``fptoui`` and ``fptosi`` instructions return a
+:ref:`poison value <poisonvalues>` if the rounded-towards-zero value is not
+representable by the result type. These intrinsics provide an alternative
+conversion, which will saturate towards the smallest and largest representable
+integer values instead.
+
+'``llvm.fptoui.sat.*``' Intrinsic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+
+This is an overloaded intrinsic. You can use ``llvm.fptoui.sat`` on any
+floating-point argument type and any integer result type, or vectors thereof.
+Not all targets may support all types, however.
+
+::
+
+ declare i32 @llvm.fptoui.sat.i32.f32(float %f)
+ declare i19 @llvm.fptoui.sat.i19.f64(double %f)
+ declare <4 x i100> @llvm.fptoui.sat.v4i100.v4f128(<4 x fp128> %f)
+
+Overview:
+"""""""""
+
+This intrinsic converts the argument into an unsigned integer using saturating
+semantics.
+
+Arguments:
+""""""""""
+
+The argument may be any floating-point or vector of floating-point type. The
+return value may be any integer or vector of integer type. The number of vector
+elements in argument and return must be the same.
+
+Semantics:
+""""""""""
+
+The conversion to integer is performed subject to the following rules:
+
+- If the argument is any NaN, zero is returned.
+- If the argument is smaller than zero (this includes negative infinity),
+ zero is returned.
+- If the argument is larger than the largest representable unsigned integer of
+ the result type (this includes positive infinity), the largest representable
+ unsigned integer is returned.
+- Otherwise, the result of rounding the argument towards zero is returned.
+
+Example:
+""""""""
+
+.. code-block:: text
+
+ %a = call i8 @llvm.fptoui.sat.i8.f32(float 123.9) ; yields i8: 123
+ %b = call i8 @llvm.fptoui.sat.i8.f32(float -5.7) ; yields i8: 0
+ %c = call i8 @llvm.fptoui.sat.i8.f32(float 377.0) ; yields i8: 255
+ %d = call i8 @llvm.fptoui.sat.i8.f32(float 0xFFF8000000000000) ; yields i8: 0
+
+'``llvm.fptosi.sat.*``' Intrinsic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+
+This is an overloaded intrinsic. You can use ``llvm.fptosi.sat`` on any
+floating-point argument type and any integer result type, or vectors thereof.
+Not all targets may support all types, however.
+
+::
+
+ declare i32 @llvm.fptosi.sat.i32.f32(float %f)
+ declare i19 @llvm.fptosi.sat.i19.f64(double %f)
+ declare <4 x i100> @llvm.fptosi.sat.v4i100.v4f128(<4 x fp128> %f)
+
+Overview:
+"""""""""
+
+This intrinsic converts the argument into a signed integer using saturating
+semantics.
+
+Arguments:
+""""""""""
+
+The argument may be any floating-point or vector of floating-point type. The
+return value may be any integer or vector of integer type. The number of vector
+elements in argument and return must be the same.
+
+Semantics:
+""""""""""
+
+The conversion to integer is performed subject to the following rules:
+
+- If the argument is any NaN, zero is returned.
+- If the argument is smaller than the smallest representable signed integer of
+ the result type (this includes negative infinity), the smallest
+ representable signed integer is returned.
+- If the argument is larger than the largest representable signed integer of
+ the result type (this includes positive infinity), the largest representable
+ signed integer is returned.
+- Otherwise, the result of rounding the argument towards zero is returned.
+
+Example:
+""""""""
+
+.. code-block:: text
+
+ %a = call i8 @llvm.fptosi.sat.i8.f32(float 23.9) ; yields i8: 23
+ %b = call i8 @llvm.fptosi.sat.i8.f32(float -130.8) ; yields i8: -128
+ %c = call i8 @llvm.fptosi.sat.i8.f32(float 999.0) ; yields i8: 127
+ %d = call i8 @llvm.fptosi.sat.i8.f32(float 0xFFF8000000000000) ; yields i8: 0
+
.. _dbg_intrinsics:
Debugger Intrinsics
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 008fea45c6f4..fd7b48b1b207 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -734,6 +734,21 @@ enum NodeType {
FP_TO_SINT,
FP_TO_UINT,
+ /// FP_TO_[US]INT_SAT - Convert floating point value in operand 0 to a
+ /// signed or unsigned integer type with the bit width given in operand 1 with
+ /// the following semantics:
+ ///
+ /// * If the value is NaN, zero is returned.
+ /// * If the value is larger/smaller than the largest/smallest integer,
+ /// the largest/smallest integer is returned (saturation).
+ /// * Otherwise the result of rounding the value towards zero is returned.
+ ///
+ /// The width given in operand 1 must be equal to, or smaller than, the scalar
+ /// result type width. It may end up being smaller than the result width as a
+ /// result of integer type legalization.
+ FP_TO_SINT_SAT,
+ FP_TO_UINT_SAT,
+
/// X = FP_ROUND(Y, TRUNC) - Rounding 'Y' from a larger floating point type
/// down to the precision of the destination VT. TRUNC is a flag, which is
/// always an integer that is zero or one. If TRUNC is 0, this is a
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 3dce96d1c064..305107c48750 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -4367,6 +4367,11 @@ class TargetLowering : public TargetLoweringBase {
/// Expand fminnum/fmaxnum into fminnum_ieee/fmaxnum_ieee with quieted inputs.
SDValue expandFMINNUM_FMAXNUM(SDNode *N, SelectionDAG &DAG) const;
+ /// Expand FP_TO_[US]INT_SAT into FP_TO_[US]INT and selects or min/max.
+ /// \param N Node to expand
+ /// \returns The expansion result
+ SDValue expandFP_TO_INT_SAT(SDNode *N, SelectionDAG &DAG) const;
+
/// Expand CTPOP nodes. Expands vector/scalar CTPOP nodes,
/// vector nodes can only succeed if all operations are legal/custom.
/// \param N Node to expand
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 331434bd212d..f71dc147416b 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -1293,6 +1293,12 @@ def int_convert_to_fp16 : DefaultAttrsIntrinsic<[llvm_i16_ty], [llvm_anyfloat_
def int_convert_from_fp16 : DefaultAttrsIntrinsic<[llvm_anyfloat_ty], [llvm_i16_ty]>;
}
+// Saturating floating point to integer intrinsics
+let IntrProperties = [IntrNoMem, IntrSpeculatable, IntrWillReturn] in {
+def int_fptoui_sat : DefaultAttrsIntrinsic<[llvm_anyint_ty], [llvm_anyfloat_ty]>;
+def int_fptosi_sat : DefaultAttrsIntrinsic<[llvm_anyint_ty], [llvm_anyfloat_ty]>;
+}
+
// Clear cache intrinsic, default to ignore (ie. emit nothing)
// maps to void __clear_cache() on supporting platforms
def int_clear_cache : Intrinsic<[], [llvm_ptr_ty, llvm_ptr_ty],
diff --git a/llvm/include/llvm/Target/TargetSelectionDAG.td b/llvm/include/llvm/Target/TargetSelectionDAG.td
index 7ba9f11962c5..d5b8aeb1055d 100644
--- a/llvm/include/llvm/Target/TargetSelectionDAG.td
+++ b/llvm/include/llvm/Target/TargetSelectionDAG.td
@@ -164,6 +164,9 @@ def SDTIntToFPOp : SDTypeProfile<1, 1, [ // [su]int_to_fp
def SDTFPToIntOp : SDTypeProfile<1, 1, [ // fp_to_[su]int
SDTCisInt<0>, SDTCisFP<1>, SDTCisSameNumEltsAs<0, 1>
]>;
+def SDTFPToIntSatOp : SDTypeProfile<1, 2, [ // fp_to_[su]int_sat
+ SDTCisInt<0>, SDTCisFP<1>, SDTCisInt<2>, SDTCisSameNumEltsAs<0, 1>
+]>;
def SDTExtInreg : SDTypeProfile<1, 2, [ // sext_inreg
SDTCisSameAs<0, 1>, SDTCisInt<0>, SDTCisVT<2, OtherVT>,
SDTCisVTSmallerThanOp<2, 1>
@@ -486,6 +489,8 @@ def sint_to_fp : SDNode<"ISD::SINT_TO_FP" , SDTIntToFPOp>;
def uint_to_fp : SDNode<"ISD::UINT_TO_FP" , SDTIntToFPOp>;
def fp_to_sint : SDNode<"ISD::FP_TO_SINT" , SDTFPToIntOp>;
def fp_to_uint : SDNode<"ISD::FP_TO_UINT" , SDTFPToIntOp>;
+def fp_to_sint_sat : SDNode<"ISD::FP_TO_SINT_SAT" , SDTFPToIntSatOp>;
+def fp_to_uint_sat : SDNode<"ISD::FP_TO_UINT_SAT" , SDTFPToIntSatOp>;
def f16_to_fp : SDNode<"ISD::FP16_TO_FP" , SDTIntToFPOp>;
def fp_to_f16 : SDNode<"ISD::FP_TO_FP16" , SDTFPToIntOp>;
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index 657f7cb03249..9e1ea7c81a35 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -179,6 +179,7 @@ class SelectionDAGLegalize {
SmallVectorImpl<SDValue> &Results);
void PromoteLegalFP_TO_INT(SDNode *N, const SDLoc &dl,
SmallVectorImpl<SDValue> &Results);
+ SDValue PromoteLegalFP_TO_INT_SAT(SDNode *Node, const SDLoc &dl);
SDValue ExpandBITREVERSE(SDValue Op, const SDLoc &dl);
SDValue ExpandBSWAP(SDValue Op, const SDLoc &dl);
@@ -1136,10 +1137,11 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
case ISD::SSUBSAT:
case ISD::USUBSAT:
case ISD::SSHLSAT:
- case ISD::USHLSAT: {
+ case ISD::USHLSAT:
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
- }
case ISD::SMULFIX:
case ISD::SMULFIXSAT:
case ISD::UMULFIX:
@@ -2736,6 +2738,30 @@ void SelectionDAGLegalize::PromoteLegalFP_TO_INT(SDNode *N, const SDLoc &dl,
Results.push_back(Operation.getValue(1));
}
+/// Promote FP_TO_*INT_SAT operation to a larger result type. At this point
+/// the result and operand types are legal and there must be a legal
+/// FP_TO_*INT_SAT operation for a larger result type.
+SDValue SelectionDAGLegalize::PromoteLegalFP_TO_INT_SAT(SDNode *Node,
+ const SDLoc &dl) {
+ unsigned Opcode = Node->getOpcode();
+
+ // Scan for the appropriate larger type to use.
+ EVT NewOutTy = Node->getValueType(0);
+ while (true) {
+ NewOutTy = (MVT::SimpleValueType)(NewOutTy.getSimpleVT().SimpleTy + 1);
+ assert(NewOutTy.isInteger() && "Ran out of possibilities!");
+
+ if (TLI.isOperationLegalOrCustom(Opcode, NewOutTy))
+ break;
+ }
+
+ // Saturation width is determined by second operand, so we don't have to
+ // perform any fixup and can directly truncate the result.
+ SDValue Result = DAG.getNode(Opcode, dl, NewOutTy, Node->getOperand(0),
+ Node->getOperand(1));
+ return DAG.getNode(ISD::TRUNCATE, dl, Node->getValueType(0), Result);
+}
+
/// Legalize a BITREVERSE scalar/vector operation as a series of mask + shifts.
SDValue SelectionDAGLegalize::ExpandBITREVERSE(SDValue Op, const SDLoc &dl) {
EVT VT = Op.getValueType();
@@ -3167,6 +3193,10 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
return true;
}
break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Results.push_back(TLI.expandFP_TO_INT_SAT(Node, DAG));
+ break;
case ISD::VAARG:
Results.push_back(DAG.expandVAArg(Node));
Results.push_back(Results[0].getValue(1));
@@ -4642,6 +4672,10 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
case ISD::STRICT_FP_TO_SINT:
PromoteLegalFP_TO_INT(Node, dl, Results);
break;
+ case ISD::FP_TO_UINT_SAT:
+ case ISD::FP_TO_SINT_SAT:
+ Results.push_back(PromoteLegalFP_TO_INT_SAT(Node, dl));
+ break;
case ISD::UINT_TO_FP:
case ISD::STRICT_UINT_TO_FP:
case ISD::SINT_TO_FP:
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
index 5c12682f81f9..ccd2bf2cc924 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
@@ -819,6 +819,9 @@ bool DAGTypeLegalizer::SoftenFloatOperand(SDNode *N, unsigned OpNo) {
case ISD::STRICT_FP_TO_UINT:
case ISD::FP_TO_SINT:
case ISD::FP_TO_UINT: Res = SoftenFloatOp_FP_TO_XINT(N); break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = SoftenFloatOp_FP_TO_XINT_SAT(N); break;
case ISD::STRICT_LROUND:
case ISD::LROUND: Res = SoftenFloatOp_LROUND(N); break;
case ISD::STRICT_LLROUND:
@@ -954,6 +957,11 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_FP_TO_XINT(SDNode *N) {
return SDValue();
}
+SDValue DAGTypeLegalizer::SoftenFloatOp_FP_TO_XINT_SAT(SDNode *N) {
+ SDValue Res = TLI.expandFP_TO_INT_SAT(N, DAG);
+ return Res;
+}
+
SDValue DAGTypeLegalizer::SoftenFloatOp_SELECT_CC(SDNode *N) {
SDValue NewLHS = N->getOperand(0), NewRHS = N->getOperand(1);
ISD::CondCode CCCode = cast<CondCodeSDNode>(N->getOperand(4))->get();
@@ -2060,6 +2068,9 @@ bool DAGTypeLegalizer::PromoteFloatOperand(SDNode *N, unsigned OpNo) {
case ISD::FCOPYSIGN: R = PromoteFloatOp_FCOPYSIGN(N, OpNo); break;
case ISD::FP_TO_SINT:
case ISD::FP_TO_UINT: R = PromoteFloatOp_FP_TO_XINT(N, OpNo); break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ R = PromoteFloatOp_FP_TO_XINT_SAT(N, OpNo); break;
case ISD::FP_EXTEND: R = PromoteFloatOp_FP_EXTEND(N, OpNo); break;
case ISD::SELECT_CC: R = PromoteFloatOp_SELECT_CC(N, OpNo); break;
case ISD::SETCC: R = PromoteFloatOp_SETCC(N, OpNo); break;
@@ -2103,6 +2114,13 @@ SDValue DAGTypeLegalizer::PromoteFloatOp_FP_TO_XINT(SDNode *N, unsigned OpNo) {
return DAG.getNode(N->getOpcode(), SDLoc(N), N->getValueType(0), Op);
}
+SDValue DAGTypeLegalizer::PromoteFloatOp_FP_TO_XINT_SAT(SDNode *N,
+ unsigned OpNo) {
+ SDValue Op = GetPromotedFloat(N->getOperand(0));
+ return DAG.getNode(N->getOpcode(), SDLoc(N), N->getValueType(0), Op,
+ N->getOperand(1));
+}
+
SDValue DAGTypeLegalizer::PromoteFloatOp_FP_EXTEND(SDNode *N, unsigned OpNo) {
SDValue Op = GetPromotedFloat(N->getOperand(0));
EVT VT = N->getValueType(0);
@@ -2846,6 +2864,9 @@ bool DAGTypeLegalizer::SoftPromoteHalfOperand(SDNode *N, unsigned OpNo) {
case ISD::FCOPYSIGN: Res = SoftPromoteHalfOp_FCOPYSIGN(N, OpNo); break;
case ISD::FP_TO_SINT:
case ISD::FP_TO_UINT: Res = SoftPromoteHalfOp_FP_TO_XINT(N); break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = SoftPromoteHalfOp_FP_TO_XINT_SAT(N); break;
case ISD::STRICT_FP_EXTEND:
case ISD::FP_EXTEND: Res = SoftPromoteHalfOp_FP_EXTEND(N); break;
case ISD::SELECT_CC: Res = SoftPromoteHalfOp_SELECT_CC(N, OpNo); break;
@@ -2915,6 +2936,20 @@ SDValue DAGTypeLegalizer::SoftPromoteHalfOp_FP_TO_XINT(SDNode *N) {
return DAG.getNode(N->getOpcode(), dl, N->getValueType(0), Res);
}
+SDValue DAGTypeLegalizer::SoftPromoteHalfOp_FP_TO_XINT_SAT(SDNode *N) {
+ SDValue Op = N->getOperand(0);
+ SDLoc dl(N);
+
+ EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType());
+
+ Op = GetSoftPromotedHalf(Op);
+
+ SDValue Res = DAG.getNode(ISD::FP16_TO_FP, dl, NVT, Op);
+
+ return DAG.getNode(N->getOpcode(), dl, N->getValueType(0), Res,
+ N->getOperand(1));
+}
+
SDValue DAGTypeLegalizer::SoftPromoteHalfOp_SELECT_CC(SDNode *N,
unsigned OpNo) {
assert(OpNo == 0 && "Can only soften the comparison values");
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 5c8a562ed9d7..4a686bc227de 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -123,6 +123,10 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
case ISD::FP_TO_SINT:
case ISD::FP_TO_UINT: Res = PromoteIntRes_FP_TO_XINT(N); break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = PromoteIntRes_FP_TO_XINT_SAT(N); break;
+
case ISD::FP_TO_FP16: Res = PromoteIntRes_FP_TO_FP16(N); break;
case ISD::FLT_ROUNDS_: Res = PromoteIntRes_FLT_ROUNDS(N); break;
@@ -596,6 +600,14 @@ SDValue DAGTypeLegalizer::PromoteIntRes_FP_TO_XINT(SDNode *N) {
DAG.getValueType(N->getValueType(0).getScalarType()));
}
+SDValue DAGTypeLegalizer::PromoteIntRes_FP_TO_XINT_SAT(SDNode *N) {
+ // Promote the result type, while keeping the original width in Op1.
+ EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+ SDLoc dl(N);
+ return DAG.getNode(N->getOpcode(), dl, NVT, N->getOperand(0),
+ N->getOperand(1));
+}
+
SDValue DAGTypeLegalizer::PromoteIntRes_FP_TO_FP16(SDNode *N) {
EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
SDLoc dl(N);
@@ -2045,6 +2057,8 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
case ISD::FP_TO_SINT: ExpandIntRes_FP_TO_SINT(N, Lo, Hi); break;
case ISD::STRICT_FP_TO_UINT:
case ISD::FP_TO_UINT: ExpandIntRes_FP_TO_UINT(N, Lo, Hi); break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT: ExpandIntRes_FP_TO_XINT_SAT(N, Lo, Hi); break;
case ISD::STRICT_LLROUND:
case ISD::STRICT_LLRINT:
case ISD::LLROUND:
@@ -3050,6 +3064,12 @@ void DAGTypeLegalizer::ExpandIntRes_FP_TO_UINT(SDNode *N, SDValue &Lo,
ReplaceValueWith(SDValue(N, 1), Tmp.second);
}
+void DAGTypeLegalizer::ExpandIntRes_FP_TO_XINT_SAT(SDNode *N, SDValue &Lo,
+ SDValue &Hi) {
+ SDValue Res = TLI.expandFP_TO_INT_SAT(N, DAG);
+ SplitInteger(Res, Lo, Hi);
+}
+
void DAGTypeLegalizer::ExpandIntRes_LLROUND_LLRINT(SDNode *N, SDValue &Lo,
SDValue &Hi) {
SDValue Op = N->getOperand(N->isStrictFPOpcode() ? 1 : 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index c267016cf37e..630a0a9adaf7 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -315,6 +315,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_CTTZ(SDNode *N);
SDValue PromoteIntRes_EXTRACT_VECTOR_ELT(SDNode *N);
SDValue PromoteIntRes_FP_TO_XINT(SDNode *N);
+ SDValue PromoteIntRes_FP_TO_XINT_SAT(SDNode *N);
SDValue PromoteIntRes_FP_TO_FP16(SDNode *N);
SDValue PromoteIntRes_FREEZE(SDNode *N);
SDValue PromoteIntRes_INT_EXTEND(SDNode *N);
@@ -424,6 +425,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
void ExpandIntRes_FLT_ROUNDS (SDNode *N, SDValue &Lo, SDValue &Hi);
void ExpandIntRes_FP_TO_SINT (SDNode *N, SDValue &Lo, SDValue &Hi);
void ExpandIntRes_FP_TO_UINT (SDNode *N, SDValue &Lo, SDValue &Hi);
+ void ExpandIntRes_FP_TO_XINT_SAT (SDNode *N, SDValue &Lo, SDValue &Hi);
void ExpandIntRes_LLROUND_LLRINT (SDNode *N, SDValue &Lo, SDValue &Hi);
void ExpandIntRes_Logical (SDNode *N, SDValue &Lo, SDValue &Hi);
@@ -561,6 +563,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue SoftenFloatOp_BR_CC(SDNode *N);
SDValue SoftenFloatOp_FP_ROUND(SDNode *N);
SDValue SoftenFloatOp_FP_TO_XINT(SDNode *N);
+ SDValue SoftenFloatOp_FP_TO_XINT_SAT(SDNode *N);
SDValue SoftenFloatOp_LROUND(SDNode *N);
SDValue SoftenFloatOp_LLROUND(SDNode *N);
SDValue SoftenFloatOp_LRINT(SDNode *N);
@@ -678,6 +681,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteFloatOp_FCOPYSIGN(SDNode *N, unsigned OpNo);
SDValue PromoteFloatOp_FP_EXTEND(SDNode *N, unsigned OpNo);
SDValue PromoteFloatOp_FP_TO_XINT(SDNode *N, unsigned OpNo);
+ SDValue PromoteFloatOp_FP_TO_XINT_SAT(SDNode *N, unsigned OpNo);
SDValue PromoteFloatOp_STORE(SDNode *N, unsigned OpNo);
SDValue PromoteFloatOp_SELECT_CC(SDNode *N, unsigned OpNo);
SDValue PromoteFloatOp_SETCC(SDNode *N, unsigned OpNo);
@@ -717,6 +721,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue SoftPromoteHalfOp_FCOPYSIGN(SDNode *N, unsigned OpNo);
SDValue SoftPromoteHalfOp_FP_EXTEND(SDNode *N);
SDValue SoftPromoteHalfOp_FP_TO_XINT(SDNode *N);
+ SDValue SoftPromoteHalfOp_FP_TO_XINT_SAT(SDNode *N);
SDValue SoftPromoteHalfOp_SETCC(SDNode *N);
SDValue SoftPromoteHalfOp_SELECT_CC(SDNode *N, unsigned OpNo);
SDValue SoftPromoteHalfOp_STORE(SDNode *N, unsigned OpNo);
@@ -761,6 +766,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue ScalarizeVecRes_SETCC(SDNode *N);
SDValue ScalarizeVecRes_UNDEF(SDNode *N);
SDValue ScalarizeVecRes_VECTOR_SHUFFLE(SDNode *N);
+ SDValue ScalarizeVecRes_FP_TO_XINT_SAT(SDNode *N);
SDValue ScalarizeVecRes_FIX(SDNode *N);
@@ -830,6 +836,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
void SplitVecRes_VECTOR_SHUFFLE(ShuffleVectorSDNode *N, SDValue &Lo,
SDValue &Hi);
void SplitVecRes_VAARG(SDNode *N, SDValue &Lo, SDValue &Hi);
+ void SplitVecRes_FP_TO_XINT_SAT(SDNode *N, SDValue &Lo, SDValue &Hi);
// Vector Operand Splitting: <128 x ty> -> 2 x <64 x ty>.
bool SplitVectorOperand(SDNode *N, unsigned OpNo);
@@ -852,6 +859,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue SplitVecOp_VSETCC(SDNode *N);
SDValue SplitVecOp_FP_ROUND(SDNode *N);
SDValue SplitVecOp_FCOPYSIGN(SDNode *N);
+ SDValue SplitVecOp_FP_TO_XINT_SAT(SDNode *N);
//===--------------------------------------------------------------------===//
// Vector Widening Support: LegalizeVectorTypes.cpp
@@ -900,6 +908,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue WidenVecRes_OverflowOp(SDNode *N, unsigned ResNo);
SDValue WidenVecRes_Convert(SDNode *N);
SDValue WidenVecRes_Convert_StrictFP(SDNode *N);
+ SDValue WidenVecRes_FP_TO_XINT_SAT(SDNode *N);
SDValue WidenVecRes_FCOPYSIGN(SDNode *N);
SDValue WidenVecRes_POWI(SDNode *N);
SDValue WidenVecRes_Unary(SDNode *N);
@@ -921,6 +930,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue WidenVecOp_VSELECT(SDNode *N);
SDValue WidenVecOp_Convert(SDNode *N);
+ SDValue WidenVecOp_FP_TO_XINT_SAT(SDNode *N);
SDValue WidenVecOp_FCOPYSIGN(SDNode *N);
SDValue WidenVecOp_VECREDUCE(SDNode *N);
SDValue WidenVecOp_VECREDUCE_SEQ(SDNode *N);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index db44ea2553ce..4015a5a0ce70 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -455,6 +455,8 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::USUBSAT:
case ISD::SSHLSAT:
case ISD::USHLSAT:
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 3c642df7ba11..f21ec1dbdfe5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -162,6 +162,11 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) {
R = ScalarizeVecRes_StrictFPOp(N);
break;
+ case ISD::FP_TO_UINT_SAT:
+ case ISD::FP_TO_SINT_SAT:
+ R = ScalarizeVecRes_FP_TO_XINT_SAT(N);
+ break;
+
case ISD::UADDO:
case ISD::SADDO:
case ISD::USUBO:
@@ -516,6 +521,23 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_VECTOR_SHUFFLE(SDNode *N) {
return GetScalarizedVector(N->getOperand(Op));
}
+SDValue DAGTypeLegalizer::ScalarizeVecRes_FP_TO_XINT_SAT(SDNode *N) {
+ SDValue Src = N->getOperand(0);
+ EVT SrcVT = Src.getValueType();
+ SDLoc dl(N);
+
+ // Handle case where result is scalarized but operand is not
+ if (getTypeAction(SrcVT) == TargetLowering::TypeScalarizeVector)
+ Src = GetScalarizedVector(Src);
+ else
+ Src = DAG.getNode(
+ ISD::EXTRACT_VECTOR_ELT, dl, SrcVT.getVectorElementType(), Src,
+ DAG.getConstant(0, dl, TLI.getVectorIdxTy(DAG.getDataLayout())));
+
+ EVT DstVT = N->getValueType(0).getVectorElementType();
+ return DAG.getNode(N->getOpcode(), dl, DstVT, Src, N->getOperand(1));
+}
+
SDValue DAGTypeLegalizer::ScalarizeVecRes_SETCC(SDNode *N) {
assert(N->getValueType(0).isVector() &&
N->getOperand(0).getValueType().isVector() &&
@@ -1015,6 +1037,11 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
SplitVecRes_StrictFPOp(N, Lo, Hi);
break;
+ case ISD::FP_TO_UINT_SAT:
+ case ISD::FP_TO_SINT_SAT:
+ SplitVecRes_FP_TO_XINT_SAT(N, Lo, Hi);
+ break;
+
case ISD::UADDO:
case ISD::SADDO:
case ISD::USUBO:
@@ -2032,6 +2059,22 @@ void DAGTypeLegalizer::SplitVecRes_VAARG(SDNode *N, SDValue &Lo, SDValue &Hi) {
ReplaceValueWith(SDValue(N, 1), Chain);
}
+void DAGTypeLegalizer::SplitVecRes_FP_TO_XINT_SAT(SDNode *N, SDValue &Lo,
+ SDValue &Hi) {
+ EVT DstVTLo, DstVTHi;
+ std::tie(DstVTLo, DstVTHi) = DAG.GetSplitDestVTs(N->getValueType(0));
+ SDLoc dl(N);
+
+ SDValue SrcLo, SrcHi;
+ EVT SrcVT = N->getOperand(0).getValueType();
+ if (getTypeAction(SrcVT) == TargetLowering::TypeSplitVector)
+ GetSplitVector(N->getOperand(0), SrcLo, SrcHi);
+ else
+ std::tie(SrcLo, SrcHi) = DAG.SplitVectorOperand(N, 0);
+
+ Lo = DAG.getNode(N->getOpcode(), dl, DstVTLo, SrcLo, N->getOperand(1));
+ Hi = DAG.getNode(N->getOpcode(), dl, DstVTHi, SrcHi, N->getOperand(1));
+}
//===----------------------------------------------------------------------===//
// Operand Vector Splitting
@@ -2096,6 +2139,10 @@ bool DAGTypeLegalizer::SplitVectorOperand(SDNode *N, unsigned OpNo) {
else
Res = SplitVecOp_UnaryOp(N);
break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = SplitVecOp_FP_TO_XINT_SAT(N);
+ break;
case ISD::FP_TO_SINT:
case ISD::FP_TO_UINT:
case ISD::STRICT_FP_TO_SINT:
@@ -2842,6 +2889,22 @@ SDValue DAGTypeLegalizer::SplitVecOp_FCOPYSIGN(SDNode *N) {
return DAG.UnrollVectorOp(N, N->getValueType(0).getVectorNumElements());
}
+SDValue DAGTypeLegalizer::SplitVecOp_FP_TO_XINT_SAT(SDNode *N) {
+ EVT ResVT = N->getValueType(0);
+ SDValue Lo, Hi;
+ SDLoc dl(N);
+ GetSplitVector(N->getOperand(0), Lo, Hi);
+ EVT InVT = Lo.getValueType();
+
+ EVT NewResVT =
+ EVT::getVectorVT(*DAG.getContext(), ResVT.getVectorElementType(),
+ InVT.getVectorElementCount());
+
+ Lo = DAG.getNode(N->getOpcode(), dl, NewResVT, Lo, N->getOperand(1));
+ Hi = DAG.getNode(N->getOpcode(), dl, NewResVT, Hi, N->getOperand(1));
+
+ return DAG.getNode(ISD::CONCAT_VECTORS, dl, ResVT, Lo, Hi);
+}
//===----------------------------------------------------------------------===//
// Result Vector Widening
@@ -2986,6 +3049,11 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
Res = WidenVecRes_Convert(N);
break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = WidenVecRes_FP_TO_XINT_SAT(N);
+ break;
+
case ISD::FABS:
case ISD::FCEIL:
case ISD::FCOS:
@@ -3495,6 +3563,27 @@ SDValue DAGTypeLegalizer::WidenVecRes_Convert(SDNode *N) {
return DAG.getBuildVector(WidenVT, DL, Ops);
}
+SDValue DAGTypeLegalizer::WidenVecRes_FP_TO_XINT_SAT(SDNode *N) {
+ SDLoc dl(N);
+ EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+ ElementCount WidenNumElts = WidenVT.getVectorElementCount();
+
+ SDValue Src = N->getOperand(0);
+ EVT SrcVT = Src.getValueType();
+
+ // Also widen the input.
+ if (getTypeAction(SrcVT) == TargetLowering::TypeWidenVector) {
+ Src = GetWidenedVector(Src);
+ SrcVT = Src.getValueType();
+ }
+
+ // Input and output not widened to the same size, give up.
+ if (WidenNumElts != SrcVT.getVectorElementCount())
+ return DAG.UnrollVectorOp(N, WidenNumElts.getKnownMinValue());
+
+ return DAG.getNode(N->getOpcode(), dl, WidenVT, Src, N->getOperand(1));
+}
+
SDValue DAGTypeLegalizer::WidenVecRes_Convert_StrictFP(SDNode *N) {
SDValue InOp = N->getOperand(1);
SDLoc DL(N);
@@ -4413,6 +4502,11 @@ bool DAGTypeLegalizer::WidenVectorOperand(SDNode *N, unsigned OpNo) {
Res = WidenVecOp_Convert(N);
break;
+ case ISD::FP_TO_SINT_SAT:
+ case ISD::FP_TO_UINT_SAT:
+ Res = WidenVecOp_FP_TO_XINT_SAT(N);
+ break;
+
case ISD::VECREDUCE_FADD:
case ISD::VECREDUCE_FMUL:
case ISD::VECREDUCE_ADD:
@@ -4586,6 +4680,28 @@ SDValue DAGTypeLegalizer::WidenVecOp_Convert(SDNode *N) {
return DAG.getBuildVector(VT, dl, Ops);
}
+SDValue DAGTypeLegalizer::WidenVecOp_FP_TO_XINT_SAT(SDNode *N) {
+ EVT DstVT = N->getValueType(0);
+ SDValue Src = GetWidenedVector(N->getOperand(0));
+ EVT SrcVT = Src.getValueType();
+ ElementCount WideNumElts = SrcVT.getVectorElementCount();
+ SDLoc dl(N);
+
+ // See if a widened result type would be legal, if so widen the node.
+ EVT WideDstVT = EVT::getVectorVT(*DAG.getContext(),
+ DstVT.getVectorElementType(), WideNumElts);
+ if (TLI.isTypeLegal(WideDstVT)) {
+ SDValue Res =
+ DAG.getNode(N->getOpcode(), dl, WideDstVT, Src, N->getOperand(1));
+ return DAG.getNode(
+ ISD::EXTRACT_SUBVECTOR, dl, DstVT, Res,
+ DAG.getConstant(0, dl, TLI.getVectorIdxTy(DAG.getDataLayout())));
+ }
+
+ // Give up and unroll.
+ return DAG.UnrollVectorOp(N);
+}
+
SDValue DAGTypeLegalizer::WidenVecOp_BITCAST(SDNode *N) {
EVT VT = N->getValueType(0);
SDValue InOp = GetWidenedVector(N->getOperand(0));
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index b9a53f34eb88..a145eccde74f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -6183,6 +6183,20 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
DAG.getNode(ISD::BITCAST, sdl, MVT::f16,
getValue(I.getArgOperand(0)))));
return;
+ case Intrinsic::fptosi_sat: {
+ EVT Type = TLI.getValueType(DAG.getDataLayout(), I.getType());
+ SDValue SatW = DAG.getConstant(Type.getScalarSizeInBits(), sdl, MVT::i32);
+ setValue(&I, DAG.getNode(ISD::FP_TO_SINT_SAT, sdl, Type,
+ getValue(I.getArgOperand(0)), SatW));
+ return;
+ }
+ case Intrinsic::fptoui_sat: {
+ EVT Type = TLI.getValueType(DAG.getDataLayout(), I.getType());
+ SDValue SatW = DAG.getConstant(Type.getScalarSizeInBits(), sdl, MVT::i32);
+ setValue(&I, DAG.getNode(ISD::FP_TO_UINT_SAT, sdl, Type,
+ getValue(I.getArgOperand(0)), SatW));
+ return;
+ }
case Intrinsic::pcmarker: {
SDValue Tmp = getValue(I.getArgOperand(0));
DAG.setRoot(DAG.getNode(ISD::PCMARKER, sdl, MVT::Other, getRoot(), Tmp));
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 82b4de3d5449..d867f3e09e9c 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -348,6 +348,8 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
case ISD::STRICT_FP_TO_SINT: return "strict_fp_to_sint";
case ISD::FP_TO_UINT: return "fp_to_uint";
case ISD::STRICT_FP_TO_UINT: return "strict_fp_to_uint";
+ case ISD::FP_TO_SINT_SAT: return "fp_to_sint_sat";
+ case ISD::FP_TO_UINT_SAT: return "fp_to_uint_sat";
case ISD::BITCAST: return "bitcast";
case ISD::ADDRSPACECAST: return "addrspacecast";
case ISD::FP16_TO_FP: return "fp16_to_fp";
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 27060c373779..d895a53e5a83 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -8193,3 +8193,105 @@ bool TargetLowering::expandREM(SDNode *Node, SDValue &Result,
}
return false;
}
+
+SDValue TargetLowering::expandFP_TO_INT_SAT(SDNode *Node,
+ SelectionDAG &DAG) const {
+ bool IsSigned = Node->getOpcode() == ISD::FP_TO_SINT_SAT;
+ SDLoc dl(SDValue(Node, 0));
+ SDValue Src = Node->getOperand(0);
+
+ // DstVT is the result type; SatWidth (operand 1) is the width we saturate to.
+ EVT SrcVT = Src.getValueType();
+ EVT DstVT = Node->getValueType(0);
+
+ unsigned SatWidth = Node->getConstantOperandVal(1);
+ unsigned DstWidth = DstVT.getScalarSizeInBits();
+ assert(SatWidth <= DstWidth &&
+ "Expected saturation width smaller than or equal to result width");
+
+ // Determine minimum and maximum integer values and their corresponding
+ // floating-point values.
+ APInt MinInt, MaxInt;
+ if (IsSigned) {
+ MinInt = APInt::getSignedMinValue(SatWidth).sextOrSelf(DstWidth);
+ MaxInt = APInt::getSignedMaxValue(SatWidth).sextOrSelf(DstWidth);
+ } else {
+ MinInt = APInt::getMinValue(SatWidth).zextOrSelf(DstWidth);
+ MaxInt = APInt::getMaxValue(SatWidth).zextOrSelf(DstWidth);
+ }
+
+ // We cannot risk emitting FP_TO_XINT nodes with a source VT of f16, as
+ // libcall emission cannot handle this: large result types would require
+ // f16 libcalls that do not exist. Promote the source to f32 instead.
+ if (SrcVT == MVT::f16) {
+ Src = DAG.getNode(ISD::FP_EXTEND, dl, MVT::f32, Src);
+ SrcVT = Src.getValueType();
+ }
+
+ APFloat MinFloat(DAG.EVTToAPFloatSemantics(SrcVT));
+ APFloat MaxFloat(DAG.EVTToAPFloatSemantics(SrcVT));
+
+ APFloat::opStatus MinStatus =
+ MinFloat.convertFromAPInt(MinInt, IsSigned, APFloat::rmTowardZero);
+ APFloat::opStatus MaxStatus =
+ MaxFloat.convertFromAPInt(MaxInt, IsSigned, APFloat::rmTowardZero);
+ bool AreExactFloatBounds = !(MinStatus & APFloat::opStatus::opInexact) &&
+ !(MaxStatus & APFloat::opStatus::opInexact);
+
+ SDValue MinFloatNode = DAG.getConstantFP(MinFloat, dl, SrcVT);
+ SDValue MaxFloatNode = DAG.getConstantFP(MaxFloat, dl, SrcVT);
+
+ // If the integer bounds are exactly representable as floats and min/max are
+ // legal, emit a min+max+fptoi sequence. Otherwise we have to use a sequence
+ // of comparisons and selects.
+ bool MinMaxLegal = isOperationLegal(ISD::FMINNUM, SrcVT) &&
+ isOperationLegal(ISD::FMAXNUM, SrcVT);
+ if (AreExactFloatBounds && MinMaxLegal) {
+ SDValue Clamped = Src;
+
+ // Clamp Src by MinFloat from below. If Src is NaN the result is MinFloat.
+ Clamped = DAG.getNode(ISD::FMAXNUM, dl, SrcVT, Clamped, MinFloatNode);
+ // Clamp by MaxFloat from above. NaN cannot occur.
+ Clamped = DAG.getNode(ISD::FMINNUM, dl, SrcVT, Clamped, MaxFloatNode);
+ // Convert clamped value to integer.
+ SDValue FpToInt = DAG.getNode(IsSigned ? ISD::FP_TO_SINT : ISD::FP_TO_UINT,
+ dl, DstVT, Clamped);
+
+ // In the unsigned case we're done, because we mapped NaN to MinFloat,
+ // which will cast to zero.
+ if (!IsSigned)
+ return FpToInt;
+
+ // Otherwise, select 0 if Src is NaN.
+ SDValue ZeroInt = DAG.getConstant(0, dl, DstVT);
+ return DAG.getSelectCC(dl, Src, Src, ZeroInt, FpToInt,
+ ISD::CondCode::SETUO);
+ }
+
+ SDValue MinIntNode = DAG.getConstant(MinInt, dl, DstVT);
+ SDValue MaxIntNode = DAG.getConstant(MaxInt, dl, DstVT);
+
+ // Result of direct conversion. The assumption here is that the operation is
+ // non-trapping and it's fine to apply it to an out-of-range value if we
+ // select it away later.
+ SDValue FpToInt =
+ DAG.getNode(IsSigned ? ISD::FP_TO_SINT : ISD::FP_TO_UINT, dl, DstVT, Src);
+
+ SDValue Select = FpToInt;
+
+ // If Src ULT MinFloat, select MinInt. In particular, this also selects
+ // MinInt if Src is NaN.
+ Select = DAG.getSelectCC(dl, Src, MinFloatNode, MinIntNode, Select,
+ ISD::CondCode::SETULT);
+ // If Src OGT MaxFloat, select MaxInt.
+ Select = DAG.getSelectCC(dl, Src, MaxFloatNode, MaxIntNode, Select,
+ ISD::CondCode::SETOGT);
+
+ // In the unsigned case we are done, because we mapped NaN to MinInt, which
+ // is already zero.
+ if (!IsSigned)
+ return Select;
+
+ // Otherwise, select 0 if Src is NaN.
+ SDValue ZeroInt = DAG.getConstant(0, dl, DstVT);
+ return DAG.getSelectCC(dl, Src, Src, ZeroInt, Select, ISD::CondCode::SETUO);
+}
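The two expansion paths above (min/max clamp, or convert-then-select) both implement the same value-level semantics: truncate toward zero, saturate to the target range, and map NaN to 0. A minimal reference model of that semantics — a sketch for illustration only, not the DAG expansion itself; the function name and Python big-int comparisons are my own, chosen so the bounds checks are exact at any width:

```python
import math

def fp_to_int_sat(x: float, width: int, signed: bool = True) -> int:
    """Reference semantics of the saturating fp-to-int casts: clamp the
    truncated value to the width-bit integer range, mapping NaN to 0."""
    if math.isnan(x):
        return 0
    if signed:
        lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    else:
        lo, hi = 0, (1 << width) - 1
    if x < lo:            # also catches -inf
        return lo
    if x > hi:            # also catches +inf
        return hi
    return math.trunc(x)  # in-range: ordinary truncating conversion
```

Note that for `signed=True, width=1` the range is `[-1, 0]`, which is exactly why the i1 tests below clamp with `fmaxnm` against -1.0 and `fminnm` against 0.0.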
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 553434cdd5fa..5797006c76fb 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -779,6 +779,8 @@ void TargetLoweringBase::initActions() {
setOperationAction(ISD::SDIVFIXSAT, VT, Expand);
setOperationAction(ISD::UDIVFIX, VT, Expand);
setOperationAction(ISD::UDIVFIXSAT, VT, Expand);
+ setOperationAction(ISD::FP_TO_SINT_SAT, VT, Expand);
+ setOperationAction(ISD::FP_TO_UINT_SAT, VT, Expand);
// Overflow operations default to expand
setOperationAction(ISD::SADDO, VT, Expand);
diff --git a/llvm/test/CodeGen/AArch64/fptosi-sat-scalar.ll b/llvm/test/CodeGen/AArch64/fptosi-sat-scalar.ll
new file mode 100644
index 000000000000..7f57d5b771ed
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/fptosi-sat-scalar.ll
@@ -0,0 +1,676 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=aarch64 < %s | FileCheck %s
+
+;
+; 32-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f32 (float)
+declare i8 @llvm.fptosi.sat.i8.f32 (float)
+declare i13 @llvm.fptosi.sat.i13.f32 (float)
+declare i16 @llvm.fptosi.sat.i16.f32 (float)
+declare i19 @llvm.fptosi.sat.i19.f32 (float)
+declare i32 @llvm.fptosi.sat.i32.f32 (float)
+declare i50 @llvm.fptosi.sat.i50.f32 (float)
+declare i64 @llvm.fptosi.sat.i64.f32 (float)
+declare i100 @llvm.fptosi.sat.i100.f32(float)
+declare i128 @llvm.fptosi.sat.i128.f32(float)
+
+define i1 @test_signed_i1_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i1_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov s1, #-1.00000000
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptosi.sat.i1.f32(float %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i8_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-1023410176
+; CHECK-NEXT: mov w9, #1123942400
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptosi.sat.i8.f32(float %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i13_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-981467136
+; CHECK-NEXT: mov w9, #61440
+; CHECK-NEXT: movk w9, #17791, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptosi.sat.i13.f32(float %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i16_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-956301312
+; CHECK-NEXT: mov w9, #65024
+; CHECK-NEXT: movk w9, #18175, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptosi.sat.i16.f32(float %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i19_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-931135488
+; CHECK-NEXT: mov w9, #65472
+; CHECK-NEXT: movk w9, #18559, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptosi.sat.i19.f32(float %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i32_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-822083584
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: mov w10, #-2147483648
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w11
+; CHECK-NEXT: mov w12, #2147483647
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptosi.sat.i32.f32(float %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i50_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-671088640
+; CHECK-NEXT: mov w11, #1476395007
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: mov x10, #-562949953421312
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w11
+; CHECK-NEXT: mov x12, #562949953421311
+; CHECK-NEXT: csel x8, x10, x8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel x8, x12, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptosi.sat.i50.f32(float %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i64_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-553648128
+; CHECK-NEXT: mov w11, #1593835519
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: mov x10, #-9223372036854775808
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w11
+; CHECK-NEXT: mov x12, #9223372036854775807
+; CHECK-NEXT: csel x8, x10, x8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel x8, x12, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptosi.sat.i64.f32(float %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i100_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-251658240
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov w8, #1895825407
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x8, #-34359738368
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #34359738367
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptosi.sat.i100.f32(float %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f32(float %f) nounwind {
+; CHECK-LABEL: test_signed_i128_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-16777216
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov w8, #2130706431
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x8, #-9223372036854775808
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #9223372036854775807
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptosi.sat.i128.f32(float %f)
+ ret i128 %x
+}
+
+;
+; 64-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f64 (double)
+declare i8 @llvm.fptosi.sat.i8.f64 (double)
+declare i13 @llvm.fptosi.sat.i13.f64 (double)
+declare i16 @llvm.fptosi.sat.i16.f64 (double)
+declare i19 @llvm.fptosi.sat.i19.f64 (double)
+declare i32 @llvm.fptosi.sat.i32.f64 (double)
+declare i50 @llvm.fptosi.sat.i50.f64 (double)
+declare i64 @llvm.fptosi.sat.i64.f64 (double)
+declare i100 @llvm.fptosi.sat.i100.f64(double)
+declare i128 @llvm.fptosi.sat.i128.f64(double)
+
+define i1 @test_signed_i1_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i1_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov d1, #-1.00000000
+; CHECK-NEXT: fmov d2, xzr
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptosi.sat.i1.f64(double %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i8_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4584664420663164928
+; CHECK-NEXT: mov x9, #211106232532992
+; CHECK-NEXT: movk x9, #16479, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptosi.sat.i8.f64(double %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i13_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4562146422526312448
+; CHECK-NEXT: mov x9, #279275953455104
+; CHECK-NEXT: movk x9, #16559, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptosi.sat.i13.f64(double %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i16_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4548635623644200960
+; CHECK-NEXT: mov x9, #281200098803712
+; CHECK-NEXT: movk x9, #16607, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptosi.sat.i16.f64(double %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i19_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4535124824762089472
+; CHECK-NEXT: mov x9, #281440616972288
+; CHECK-NEXT: movk x9, #16655, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptosi.sat.i19.f64(double %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i32_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptosi.sat.i32.f64(double %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i50_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4395513236313604096
+; CHECK-NEXT: mov x9, #-16
+; CHECK-NEXT: movk x9, #17151, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs x8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptosi.sat.i50.f64(double %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i64_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x9, #-4332462841530417152
+; CHECK-NEXT: mov x11, #4890909195324358655
+; CHECK-NEXT: fmov d1, x9
+; CHECK-NEXT: fcvtzs x8, d0
+; CHECK-NEXT: mov x10, #-9223372036854775808
+; CHECK-NEXT: fcmp d0, d1
+; CHECK-NEXT: fmov d1, x11
+; CHECK-NEXT: mov x12, #9223372036854775807
+; CHECK-NEXT: csel x8, x10, x8, lt
+; CHECK-NEXT: fcmp d0, d1
+; CHECK-NEXT: csel x8, x12, x8, gt
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptosi.sat.i64.f64(double %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i100_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: mov x8, #-4170333254945079296
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov x8, #5053038781909696511
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov x8, #-34359738368
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #34359738367
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp d8, d8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptosi.sat.i100.f64(double %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f64(double %f) nounwind {
+; CHECK-LABEL: test_signed_i128_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: mov x8, #-4044232465378705408
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov x8, #5179139571476070399
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov x8, #-9223372036854775808
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #9223372036854775807
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp d8, d8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptosi.sat.i128.f64(double %f)
+ ret i128 %x
+}
+
+;
+; 16-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f16 (half)
+declare i8 @llvm.fptosi.sat.i8.f16 (half)
+declare i13 @llvm.fptosi.sat.i13.f16 (half)
+declare i16 @llvm.fptosi.sat.i16.f16 (half)
+declare i19 @llvm.fptosi.sat.i19.f16 (half)
+declare i32 @llvm.fptosi.sat.i32.f16 (half)
+declare i50 @llvm.fptosi.sat.i50.f16 (half)
+declare i64 @llvm.fptosi.sat.i64.f16 (half)
+declare i100 @llvm.fptosi.sat.i100.f16(half)
+declare i128 @llvm.fptosi.sat.i128.f16(half)
+
+define i1 @test_signed_i1_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i1_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, #-1.00000000
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptosi.sat.i1.f16(half %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i8_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-1023410176
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov w9, #1123942400
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptosi.sat.i8.f16(half %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i13_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-981467136
+; CHECK-NEXT: mov w9, #61440
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: movk w9, #17791, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptosi.sat.i13.f16(half %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i16_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-956301312
+; CHECK-NEXT: mov w9, #65024
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: movk w9, #18175, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptosi.sat.i16.f16(half %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i19_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-931135488
+; CHECK-NEXT: mov w9, #65472
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: movk w9, #18559, lsl #16
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fmaxnm s1, s0, s1
+; CHECK-NEXT: fmov s2, w9
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptosi.sat.i19.f16(half %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i32_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: mov w8, #1325400063
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: mov w9, #2147483647
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel w8, w9, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptosi.sat.i32.f16(half %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i50_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-671088640
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: mov w8, #1476395007
+; CHECK-NEXT: mov x9, #-562949953421312
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: mov x9, #562949953421311
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptosi.sat.i50.f16(half %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i64_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-553648128
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: mov w8, #1593835519
+; CHECK-NEXT: mov x9, #-9223372036854775808
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: mov x9, #9223372036854775807
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptosi.sat.i64.f16(half %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i100_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-251658240
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov w8, #1895825407
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x8, #-34359738368
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #34359738367
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptosi.sat.i100.f16(half %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f16(half %f) nounwind {
+; CHECK-LABEL: test_signed_i128_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-16777216
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov w8, #2130706431
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x8, #-9223372036854775808
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: csel x8, x8, x1, lt
+; CHECK-NEXT: mov x9, #9223372036854775807
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: csinv x9, x10, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x0, xzr, x9, vs
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptosi.sat.i128.f16(half %f)
+ ret i128 %x
+}
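The magic integer immediates materialized by `mov`/`movk` in the scalar tests above are just the f32 bit patterns of the saturation bounds. A quick way to sanity-check one of them, using the i13 case (bounds [-4096, 4095], both exactly representable in f32) — `float_bits` is a helper introduced here for illustration, not part of the test file:

```python
import struct

def float_bits(x: float) -> int:
    """IEEE-754 single-precision bit pattern of x, as a signed 32-bit int
    (the form the mov/movk immediates take in the CHECK lines)."""
    return struct.unpack('<i', struct.pack('<f', x))[0]

# Signed i13 saturates to [-4096, 4095].
print(float_bits(-4096.0))      # -981467136: the 'mov w8, #-981467136' immediate
print(hex(float_bits(4095.0)))  # 0x457ff000 == (17791 << 16) | 61440, i.e.
                                # 'mov w9, #61440' + 'movk w9, #17791, lsl #16'
```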
diff --git a/llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll b/llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll
new file mode 100644
index 000000000000..d0a9c4ddd67f
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll
@@ -0,0 +1,2807 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=aarch64 < %s | FileCheck %s
+
+;
+; Float to signed 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptosi.sat.v1f32.v1i32 (<1 x float>)
+declare <2 x i32> @llvm.fptosi.sat.v2f32.v2i32 (<2 x float>)
+declare <3 x i32> @llvm.fptosi.sat.v3f32.v3i32 (<3 x float>)
+declare <4 x i32> @llvm.fptosi.sat.v4f32.v4i32 (<4 x float>)
+declare <5 x i32> @llvm.fptosi.sat.v5f32.v5i32 (<5 x float>)
+declare <6 x i32> @llvm.fptosi.sat.v6f32.v6i32 (<6 x float>)
+declare <7 x i32> @llvm.fptosi.sat.v7f32.v7i32 (<7 x float>)
+declare <8 x i32> @llvm.fptosi.sat.v8f32.v8i32 (<8 x float>)
+
+define <1 x i32> @test_signed_v1f32_v1i32(<1 x float> %f) {
+; CHECK-LABEL: test_signed_v1f32_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptosi.sat.v1f32.v1i32(<1 x float> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_signed_v2f32_v2i32(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2f32.v2i32(<2 x float> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_signed_v3f32_v3i32(<3 x float> %f) {
+; CHECK-LABEL: test_signed_v3f32_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: mov s3, v0.s[2]
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov s1, v0.s[3]
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptosi.sat.v3i32.v3f32(<3 x float> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_signed_v4f32_v4i32(<4 x float> %f) {
+; CHECK-LABEL: test_signed_v4f32_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: mov s3, v0.s[2]
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov s1, v0.s[3]
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptosi.sat.v4i32.v4f32(<4 x float> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_signed_v5f32_v5i32(<5 x float> %f) {
+; CHECK-LABEL: test_signed_v5f32_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-822083584
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s5, w9
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: mov w10, #-2147483648
+; CHECK-NEXT: fmov s6, w11
+; CHECK-NEXT: fcmp s0, s5
+; CHECK-NEXT: mov w12, #2147483647
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s0, s6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: fcvtzs w13, s1
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s5
+; CHECK-NEXT: csel w8, w10, w13, lt
+; CHECK-NEXT: fcmp s1, s6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w14, s2
+; CHECK-NEXT: csel w1, wzr, w8, vs
+; CHECK-NEXT: fcmp s2, s5
+; CHECK-NEXT: csel w8, w10, w14, lt
+; CHECK-NEXT: fcmp s2, s6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fcvtzs w9, s3
+; CHECK-NEXT: csel w2, wzr, w8, vs
+; CHECK-NEXT: fcmp s3, s5
+; CHECK-NEXT: csel w8, w10, w9, lt
+; CHECK-NEXT: fcmp s3, s6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: fcvtzs w11, s4
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: fcmp s4, s5
+; CHECK-NEXT: csel w8, w10, w11, lt
+; CHECK-NEXT: fcmp s4, s6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: csel w4, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptosi.sat.v5i32.v5f32(<5 x float> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_signed_v6f32_v6i32(<6 x float> %f) {
+; CHECK-LABEL: test_signed_v6f32_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-822083584
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s6, w9
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov w10, #-2147483648
+; CHECK-NEXT: fcmp s5, s6
+; CHECK-NEXT: fmov s7, w11
+; CHECK-NEXT: mov w12, #2147483647
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s5, s7
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: fcvtzs w13, s4
+; CHECK-NEXT: csel w5, wzr, w8, vs
+; CHECK-NEXT: fcmp s4, s6
+; CHECK-NEXT: csel w8, w10, w13, lt
+; CHECK-NEXT: fcmp s4, s7
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvtzs w14, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s6
+; CHECK-NEXT: csel w13, w10, w14, lt
+; CHECK-NEXT: fcmp s0, s7
+; CHECK-NEXT: csel w13, w12, w13, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w0, wzr, w13, vs
+; CHECK-NEXT: fcmp s1, s6
+; CHECK-NEXT: csel w9, w10, w9, lt
+; CHECK-NEXT: fcmp s1, s7
+; CHECK-NEXT: csel w9, w12, w9, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w11, s2
+; CHECK-NEXT: csel w1, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s6
+; CHECK-NEXT: csel w9, w10, w11, lt
+; CHECK-NEXT: fcmp s2, s7
+; CHECK-NEXT: csel w9, w12, w9, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmov s4, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: csel w2, wzr, w9, vs
+; CHECK-NEXT: fcmp s3, s6
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s3, s7
+; CHECK-NEXT: mov v4.s[1], w5
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: fmov w4, s4
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptosi.sat.v6i32.v6f32(<6 x float> %f)
+ ret <6 x i32> %x
+}
+
+define <7 x i32> @test_signed_v7f32_v7i32(<7 x float> %f) {
+; CHECK-LABEL: test_signed_v7f32_v7i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #-822083584
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s7, w9
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov w10, #-2147483648
+; CHECK-NEXT: fcmp s5, s7
+; CHECK-NEXT: fmov s16, w11
+; CHECK-NEXT: mov w12, #2147483647
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s5, s16
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: fcvtzs w13, s4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s4, s7
+; CHECK-NEXT: csel w11, w10, w13, lt
+; CHECK-NEXT: fcmp s4, s16
+; CHECK-NEXT: csel w11, w12, w11, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvtzs w14, s6
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s6, s7
+; CHECK-NEXT: csel w14, w10, w14, lt
+; CHECK-NEXT: fcmp s6, s16
+; CHECK-NEXT: csel w14, w12, w14, gt
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: csel w6, wzr, w14, vs
+; CHECK-NEXT: fcmp s0, s7
+; CHECK-NEXT: csel w9, w10, w9, lt
+; CHECK-NEXT: fcmp s0, s16
+; CHECK-NEXT: csel w9, w12, w9, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: fcvtzs w13, s1
+; CHECK-NEXT: csel w0, wzr, w9, vs
+; CHECK-NEXT: fcmp s1, s7
+; CHECK-NEXT: csel w9, w10, w13, lt
+; CHECK-NEXT: fcmp s1, s16
+; CHECK-NEXT: csel w9, w12, w9, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmov s4, w11
+; CHECK-NEXT: fcvtzs w11, s2
+; CHECK-NEXT: csel w1, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s7
+; CHECK-NEXT: csel w9, w10, w11, lt
+; CHECK-NEXT: fcmp s2, s16
+; CHECK-NEXT: csel w9, w12, w9, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov v4.s[1], w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: csel w2, wzr, w9, vs
+; CHECK-NEXT: fcmp s3, s7
+; CHECK-NEXT: csel w8, w10, w8, lt
+; CHECK-NEXT: fcmp s3, s16
+; CHECK-NEXT: mov v4.s[2], w6
+; CHECK-NEXT: csel w8, w12, w8, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: mov w5, v4.s[1]
+; CHECK-NEXT: fmov w4, s4
+; CHECK-NEXT: ret
+ %x = call <7 x i32> @llvm.fptosi.sat.v7i32.v7f32(<7 x float> %f)
+ ret <7 x i32> %x
+}
+
+define <8 x i32> @test_signed_v8f32_v8i32(<8 x float> %f) {
+; CHECK-LABEL: test_signed_v8f32_v8i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w10, #-822083584
+; CHECK-NEXT: mov s3, v0.s[1]
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s2, w10
+; CHECK-NEXT: mov w8, #-2147483648
+; CHECK-NEXT: fmov s5, w11
+; CHECK-NEXT: fcvtzs w11, s3
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: mov w9, #2147483647
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s3, s5
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: fcvtzs w10, s0
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s0, s5
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: mov s4, v0.s[2]
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: mov s3, v0.s[3]
+; CHECK-NEXT: fmov s0, w10
+; CHECK-NEXT: fcvtzs w10, s4
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s4, s5
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: mov v0.s[1], w11
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: mov v0.s[2], w10
+; CHECK-NEXT: fcvtzs w10, s3
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s3, s5
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: mov s4, v1.s[1]
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: mov v0.s[3], w10
+; CHECK-NEXT: fcvtzs w10, s4
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s4, s5
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvtzs w11, s1
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s1, s5
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov s3, v1.s[2]
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: mov s4, v1.s[3]
+; CHECK-NEXT: fmov s1, w11
+; CHECK-NEXT: fcvtzs w11, s3
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s3, s5
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: mov v1.s[1], w10
+; CHECK-NEXT: fcvtzs w10, s4
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: csel w8, w8, w10, lt
+; CHECK-NEXT: fcmp s4, s5
+; CHECK-NEXT: csel w8, w9, w8, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: mov v1.s[2], w11
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v1.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <8 x i32> @llvm.fptosi.sat.v8i32.v8f32(<8 x float> %f)
+ ret <8 x i32> %x
+}
+
+;
+; Double to signed 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptosi.sat.v1i32.v1f64(<1 x double>)
+declare <2 x i32> @llvm.fptosi.sat.v2i32.v2f64(<2 x double>)
+declare <3 x i32> @llvm.fptosi.sat.v3i32.v3f64(<3 x double>)
+declare <4 x i32> @llvm.fptosi.sat.v4i32.v4f64(<4 x double>)
+declare <5 x i32> @llvm.fptosi.sat.v5i32.v5f64(<5 x double>)
+declare <6 x i32> @llvm.fptosi.sat.v6i32.v6f64(<6 x double>)
+
+define <1 x i32> @test_signed_v1f64_v1i32(<1 x double> %f) {
+; CHECK-LABEL: test_signed_v1f64_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fmaxnm d1, d0, d1
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptosi.sat.v1i32.v1f64(<1 x double> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_signed_v2f64_v2i32(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2i32.v2f64(<2 x double> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_signed_v3f64_v3i32(<3 x double> %f) {
+; CHECK-LABEL: test_signed_v3f64_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d1, d3
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fmaxnm d5, d0, d3
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d1
+; CHECK-NEXT: fminnm d5, d5, d4
+; CHECK-NEXT: fcvtzs w9, d5
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: fmaxnm d1, d2, d3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmaxnm d3, d3, d0
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fminnm d3, d3, d4
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcmp d2, d2
+; CHECK-NEXT: fcvtzs w8, d3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptosi.sat.v3i32.v3f64(<3 x double> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_signed_v4f64_v4i32(<4 x double> %f) {
+; CHECK-LABEL: test_signed_v4f64_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d4, x8
+; CHECK-NEXT: fmaxnm d5, d2, d4
+; CHECK-NEXT: fcmp d2, d2
+; CHECK-NEXT: fmov d2, x9
+; CHECK-NEXT: fminnm d5, d5, d2
+; CHECK-NEXT: fcvtzs w8, d5
+; CHECK-NEXT: fmaxnm d5, d0, d4
+; CHECK-NEXT: fminnm d5, d5, d2
+; CHECK-NEXT: mov d3, v1.d[1]
+; CHECK-NEXT: fcvtzs w9, d5
+; CHECK-NEXT: fmaxnm d5, d1, d4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: fmaxnm d4, d3, d4
+; CHECK-NEXT: fminnm d5, d5, d2
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fminnm d2, d4, d2
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzs w9, d5
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp d3, d3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptosi.sat.v4i32.v4f64(<4 x double> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_signed_v5f64_v5i32(<5 x double> %f) {
+; CHECK-LABEL: test_signed_v5f64_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d5, x8
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: fmaxnm d0, d0, d5
+; CHECK-NEXT: fmov d6, x9
+; CHECK-NEXT: fmaxnm d7, d1, d5
+; CHECK-NEXT: fminnm d0, d0, d6
+; CHECK-NEXT: fmaxnm d16, d2, d5
+; CHECK-NEXT: fminnm d7, d7, d6
+; CHECK-NEXT: fcvtzs w8, d0
+; CHECK-NEXT: fmaxnm d17, d3, d5
+; CHECK-NEXT: fminnm d16, d16, d6
+; CHECK-NEXT: fcvtzs w9, d7
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d5, d4, d5
+; CHECK-NEXT: fminnm d17, d17, d6
+; CHECK-NEXT: fcvtzs w10, d16
+; CHECK-NEXT: csel w1, wzr, w9, vs
+; CHECK-NEXT: fcmp d2, d2
+; CHECK-NEXT: fminnm d5, d5, d6
+; CHECK-NEXT: fcvtzs w11, d17
+; CHECK-NEXT: csel w2, wzr, w10, vs
+; CHECK-NEXT: fcmp d3, d3
+; CHECK-NEXT: fcvtzs w12, d5
+; CHECK-NEXT: csel w3, wzr, w11, vs
+; CHECK-NEXT: fcmp d4, d4
+; CHECK-NEXT: csel w4, wzr, w12, vs
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptosi.sat.v5i32.v5f64(<5 x double> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_signed_v6f64_v6i32(<6 x double> %f) {
+; CHECK-LABEL: test_signed_v6f64_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d6, x8
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: fmaxnm d0, d0, d6
+; CHECK-NEXT: fmov d7, x9
+; CHECK-NEXT: fmaxnm d16, d1, d6
+; CHECK-NEXT: fminnm d0, d0, d7
+; CHECK-NEXT: fmaxnm d17, d2, d6
+; CHECK-NEXT: fminnm d16, d16, d7
+; CHECK-NEXT: fcvtzs w8, d0
+; CHECK-NEXT: fmaxnm d18, d3, d6
+; CHECK-NEXT: fminnm d17, d17, d7
+; CHECK-NEXT: fcvtzs w9, d16
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d19, d4, d6
+; CHECK-NEXT: fminnm d18, d18, d7
+; CHECK-NEXT: fcvtzs w10, d17
+; CHECK-NEXT: csel w1, wzr, w9, vs
+; CHECK-NEXT: fcmp d2, d2
+; CHECK-NEXT: fmaxnm d6, d5, d6
+; CHECK-NEXT: fminnm d19, d19, d7
+; CHECK-NEXT: fcvtzs w11, d18
+; CHECK-NEXT: csel w2, wzr, w10, vs
+; CHECK-NEXT: fcmp d3, d3
+; CHECK-NEXT: fminnm d6, d6, d7
+; CHECK-NEXT: fcvtzs w12, d19
+; CHECK-NEXT: csel w3, wzr, w11, vs
+; CHECK-NEXT: fcmp d4, d4
+; CHECK-NEXT: fcvtzs w13, d6
+; CHECK-NEXT: csel w4, wzr, w12, vs
+; CHECK-NEXT: fcmp d5, d5
+; CHECK-NEXT: csel w5, wzr, w13, vs
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptosi.sat.v6i32.v6f64(<6 x double> %f)
+ ret <6 x i32> %x
+}
+
+;
+; FP128 to signed 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptosi.sat.v1i32.v1f128(<1 x fp128>)
+declare <2 x i32> @llvm.fptosi.sat.v2i32.v2f128(<2 x fp128>)
+declare <3 x i32> @llvm.fptosi.sat.v3i32.v3f128(<3 x fp128>)
+declare <4 x i32> @llvm.fptosi.sat.v4i32.v4f128(<4 x fp128>)
+
+define <1 x i32> @test_signed_v1f128_v1i32(<1 x fp128> %f) {
+; CHECK-LABEL: test_signed_v1f128_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #32 // =32
+; CHECK-NEXT: stp x30, x19, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 32
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w30, -16
+; CHECK-NEXT: adrp x8, .LCPI14_0
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: mov w8, #-2147483648
+; CHECK-NEXT: csel w19, w8, w0, lt
+; CHECK-NEXT: adrp x8, .LCPI14_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI14_1]
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: mov w8, #2147483647
+; CHECK-NEXT: csel w19, w8, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: ldp x30, x19, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: add sp, sp, #32 // =32
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptosi.sat.v1i32.v1f128(<1 x fp128> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_signed_v2f128_v2i32(<2 x fp128> %f) {
+; CHECK-LABEL: test_signed_v2f128_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #112 // =112
+; CHECK-NEXT: str x30, [sp, #64] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #96] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 112
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -48
+; CHECK-NEXT: adrp x8, .LCPI15_0
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: stp q1, q0, [sp, #32] // 32-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI15_0]
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q1, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: adrp x8, .LCPI15_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI15_1]
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: mov w20, #-2147483648
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: mov w21, #2147483647
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w22, wzr, w19, ne
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w22
+; CHECK-NEXT: ldp x20, x19, [sp, #96] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #64] // 8-byte Folded Reload
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: add sp, sp, #112 // =112
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2i32.v2f128(<2 x fp128> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_signed_v3f128_v3i32(<3 x fp128> %f) {
+; CHECK-LABEL: test_signed_v3f128_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #128 // =128
+; CHECK-NEXT: str x30, [sp, #80] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #96] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #112] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 128
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -48
+; CHECK-NEXT: adrp x8, .LCPI16_0
+; CHECK-NEXT: stp q0, q2, [sp, #48] // 32-byte Folded Spill
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: str q1, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI16_0]
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q1, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: adrp x8, .LCPI16_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI16_1]
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: mov w20, #-2147483648
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: mov w21, #2147483647
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w22, wzr, w19, ne
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w22
+; CHECK-NEXT: str q0, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: ldp x20, x19, [sp, #112] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #96] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #80] // 8-byte Folded Reload
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: add sp, sp, #128 // =128
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptosi.sat.v3i32.v3f128(<3 x fp128> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_signed_v4f128_v4i32(<4 x fp128> %f) {
+; CHECK-LABEL: test_signed_v4f128_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #144 // =144
+; CHECK-NEXT: str x30, [sp, #96] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #112] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #128] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 144
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -48
+; CHECK-NEXT: adrp x8, .LCPI17_0
+; CHECK-NEXT: stp q2, q3, [sp, #64] // 32-byte Folded Spill
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI17_0]
+; CHECK-NEXT: str q0, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q1, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: adrp x8, .LCPI17_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI17_1]
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: mov w20, #-2147483648
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: str q1, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: mov w21, #2147483647
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: ldp q1, q0, [sp, #32] // 32-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w22, wzr, w19, ne
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w22
+; CHECK-NEXT: str q0, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: ldp q1, q0, [sp, #32] // 32-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: str q0, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixtfsi
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, w20, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csel w19, w21, w19, gt
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: bl __unordtf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: csel w8, wzr, w19, ne
+; CHECK-NEXT: ldp x20, x19, [sp, #128] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #112] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #96] // 8-byte Folded Reload
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: add sp, sp, #144 // =144
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptosi.sat.v4i32.v4f128(<4 x fp128> %f)
+ ret <4 x i32> %x
+}
+
+;
+; FP16 to signed 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptosi.sat.v1i32.v1f16(<1 x half>)
+declare <2 x i32> @llvm.fptosi.sat.v2i32.v2f16(<2 x half>)
+declare <3 x i32> @llvm.fptosi.sat.v3i32.v3f16(<3 x half>)
+declare <4 x i32> @llvm.fptosi.sat.v4i32.v4f16(<4 x half>)
+declare <5 x i32> @llvm.fptosi.sat.v5i32.v5f16(<5 x half>)
+declare <6 x i32> @llvm.fptosi.sat.v6i32.v6f16(<6 x half>)
+declare <7 x i32> @llvm.fptosi.sat.v7i32.v7f16(<7 x half>)
+declare <8 x i32> @llvm.fptosi.sat.v8i32.v8f16(<8 x half>)
+
+define <1 x i32> @test_signed_v1f16_v1i32(<1 x half> %f) {
+; CHECK-LABEL: test_signed_v1f16_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: mov w8, #1325400063
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: mov w9, #2147483647
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csel w8, w9, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptosi.sat.v1i32.v1f16(<1 x half> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_signed_v2f16_v2i32(<2 x half> %f) {
+; CHECK-LABEL: test_signed_v2f16_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2i32.v2f16(<2 x half> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_signed_v3f16_v3i32(<3 x half> %f) {
+; CHECK-LABEL: test_signed_v3f16_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w10, s2
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov v0.s[2], w10
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptosi.sat.v3f16.v3i32(<3 x half> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_signed_v4f16_v4i32(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w10, s2
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov v0.s[2], w10
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptosi.sat.v4f16.v4i32(<4 x half> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_signed_v5f16_v5i32(<5 x half> %f) {
+; CHECK-LABEL: test_signed_v5f16_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fcvtzs w12, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w8, w9, w12, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w0, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w1, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w2, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: ext v0.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w12, s0
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w12, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w4, wzr, w8, vs
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptosi.sat.v5f16.v5i32(<5 x half> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_signed_v6f16_v6i32(<6 x half> %f) {
+; CHECK-LABEL: test_signed_v6f16_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov h2, v1.h[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w5, wzr, w8, vs
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w8, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w10, s2
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs w12, s1
+; CHECK-NEXT: csel w0, wzr, w10, vs
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h2, v0.h[2]
+; CHECK-NEXT: csel w10, w9, w12, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w13, s2
+; CHECK-NEXT: csel w1, wzr, w10, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: csel w10, w9, w13, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w2, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s4
+; CHECK-NEXT: mov v1.s[1], w5
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: fmov w4, s1
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptosi.sat.v6f16.v6i32(<6 x half> %f)
+ ret <6 x i32> %x
+}
+
+define <7 x i32> @test_signed_v7f16_v7i32(<7 x half> %f) {
+; CHECK-LABEL: test_signed_v7f16_v7i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: mov w10, #-822083584
+; CHECK-NEXT: mov h4, v3.h[1]
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fmov s2, w10
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: mov w8, #-2147483648
+; CHECK-NEXT: fmov s1, w11
+; CHECK-NEXT: fcvtzs w10, s4
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: mov w9, #2147483647
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s4, s1
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvt s4, h3
+; CHECK-NEXT: fcvtzs w11, s4
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s4, s1
+; CHECK-NEXT: mov h3, v3.h[2]
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvtzs w12, s3
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: csel w12, w8, w12, lt
+; CHECK-NEXT: fcmp s3, s1
+; CHECK-NEXT: fcvt s4, h0
+; CHECK-NEXT: csel w12, w9, w12, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: fcvtzs w13, s4
+; CHECK-NEXT: csel w6, wzr, w12, vs
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: csel w12, w8, w13, lt
+; CHECK-NEXT: fcmp s4, s1
+; CHECK-NEXT: csel w12, w9, w12, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvtzs w13, s3
+; CHECK-NEXT: csel w0, wzr, w12, vs
+; CHECK-NEXT: fcmp s3, s2
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: csel w12, w8, w13, lt
+; CHECK-NEXT: fcmp s3, s1
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: csel w12, w9, w12, gt
+; CHECK-NEXT: fcmp s3, s3
+; CHECK-NEXT: fmov s3, w11
+; CHECK-NEXT: fcvtzs w11, s4
+; CHECK-NEXT: csel w1, wzr, w12, vs
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s4, s1
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: mov v3.s[1], w10
+; CHECK-NEXT: fcvtzs w10, s0
+; CHECK-NEXT: csel w2, wzr, w11, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w8, w10, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: mov v3.s[2], w6
+; CHECK-NEXT: csel w8, w9, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w3, wzr, w8, vs
+; CHECK-NEXT: mov w5, v3.s[1]
+; CHECK-NEXT: fmov w4, s3
+; CHECK-NEXT: ret
+ %x = call <7 x i32> @llvm.fptosi.sat.v7f16.v7i32(<7 x half> %f)
+ ret <7 x i32> %x
+}
+
+define <8 x i32> @test_signed_v8f16_v8i32(<8 x half> %f) {
+; CHECK-LABEL: test_signed_v8f16_v8i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w10, #-822083584
+; CHECK-NEXT: mov w11, #1325400063
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: mov w8, #-2147483648
+; CHECK-NEXT: fmov s2, w11
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov w9, #2147483647
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvt s4, h0
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w11, s4
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s4, s3
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: ext v6.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: fmov s0, w11
+; CHECK-NEXT: fcvtzs w11, s5
+; CHECK-NEXT: fcmp s5, s3
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s5, s2
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h4, v6.h[1]
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: mov v0.s[2], w11
+; CHECK-NEXT: fcvtzs w11, s4
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s4, s3
+; CHECK-NEXT: csel w11, w8, w11, lt
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: fcvt s1, h6
+; CHECK-NEXT: csel w11, w9, w11, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: mov v0.s[3], w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: csel w11, wzr, w11, vs
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h4, v6.h[2]
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fmov s1, w10
+; CHECK-NEXT: fcvtzs w10, s4
+; CHECK-NEXT: fcmp s4, s3
+; CHECK-NEXT: mov h5, v6.h[3]
+; CHECK-NEXT: csel w10, w8, w10, lt
+; CHECK-NEXT: fcmp s4, s2
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: csel w10, w9, w10, gt
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: mov v1.s[1], w11
+; CHECK-NEXT: fcvtzs w11, s5
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s5, s3
+; CHECK-NEXT: csel w8, w8, w11, lt
+; CHECK-NEXT: fcmp s5, s2
+; CHECK-NEXT: csel w8, w9, w8, gt
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v1.s[2], w10
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v1.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <8 x i32> @llvm.fptosi.sat.v8f16.v8i32(<8 x half> %f)
+ ret <8 x i32> %x
+}
+
+;
+; 2-Vector float to signed integer -- result size variation
+;
+
+declare <2 x i1> @llvm.fptosi.sat.v2f32.v2i1 (<2 x float>)
+declare <2 x i8> @llvm.fptosi.sat.v2f32.v2i8 (<2 x float>)
+declare <2 x i13> @llvm.fptosi.sat.v2f32.v2i13 (<2 x float>)
+declare <2 x i16> @llvm.fptosi.sat.v2f32.v2i16 (<2 x float>)
+declare <2 x i19> @llvm.fptosi.sat.v2f32.v2i19 (<2 x float>)
+declare <2 x i50> @llvm.fptosi.sat.v2f32.v2i50 (<2 x float>)
+declare <2 x i64> @llvm.fptosi.sat.v2f32.v2i64 (<2 x float>)
+declare <2 x i100> @llvm.fptosi.sat.v2f32.v2i100(<2 x float>)
+declare <2 x i128> @llvm.fptosi.sat.v2f32.v2i128(<2 x float>)
+
+define <2 x i1> @test_signed_v2f32_v2i1(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: fmov s2, #-1.00000000
+; CHECK-NEXT: fmov s3, wzr
+; CHECK-NEXT: fmaxnm s4, s1, s2
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmaxnm s1, s0, s2
+; CHECK-NEXT: fminnm s2, s4, s3
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i1> @llvm.fptosi.sat.v2f32.v2i1(<2 x float> %f)
+ ret <2 x i1> %x
+}
+
+define <2 x i8> @test_signed_v2f32_v2i8(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-1023410176
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w9, #1123942400
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fmaxnm s3, s1, s2
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmaxnm s1, s0, s2
+; CHECK-NEXT: fminnm s2, s3, s4
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i8> @llvm.fptosi.sat.v2f32.v2i8(<2 x float> %f)
+ ret <2 x i8> %x
+}
+
+define <2 x i13> @test_signed_v2f32_v2i13(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-981467136
+; CHECK-NEXT: mov w9, #61440
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: movk w9, #17791, lsl #16
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fmaxnm s3, s1, s2
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmaxnm s1, s0, s2
+; CHECK-NEXT: fminnm s2, s3, s4
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i13> @llvm.fptosi.sat.v2f32.v2i13(<2 x float> %f)
+ ret <2 x i13> %x
+}
+
+define <2 x i16> @test_signed_v2f32_v2i16(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-956301312
+; CHECK-NEXT: mov w9, #65024
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: movk w9, #18175, lsl #16
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fmaxnm s3, s1, s2
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmaxnm s1, s0, s2
+; CHECK-NEXT: fminnm s2, s3, s4
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i16> @llvm.fptosi.sat.v2f32.v2i16(<2 x float> %f)
+ ret <2 x i16> %x
+}
+
+define <2 x i19> @test_signed_v2f32_v2i19(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-931135488
+; CHECK-NEXT: mov w9, #65472
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: movk w9, #18559, lsl #16
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fmaxnm s3, s1, s2
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmaxnm s1, s0, s2
+; CHECK-NEXT: fminnm s2, s3, s4
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i19> @llvm.fptosi.sat.v2f32.v2i19(<2 x float> %f)
+ ret <2 x i19> %x
+}
+
+define <2 x i32> @test_signed_v2f32_v2i32_duplicate(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w8, s0
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w10
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2f32.v2i32(<2 x float> %f)
+ ret <2 x i32> %x
+}
+
+define <2 x i50> @test_signed_v2f32_v2i50(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-671088640
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1476395007
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov x9, #-562949953421312
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs x10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov x11, #562949953421311
+; CHECK-NEXT: csel x10, x9, x10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel x10, x11, x10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: csel x10, xzr, x10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x8, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x10
+; CHECK-NEXT: ret
+ %x = call <2 x i50> @llvm.fptosi.sat.v2f32.v2i50(<2 x float> %f)
+ ret <2 x i50> %x
+}
+
+define <2 x i64> @test_signed_v2f32_v2i64(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-553648128
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w10, #1593835519
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov x9, #-9223372036854775808
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: fcvtzs x10, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov x11, #9223372036854775807
+; CHECK-NEXT: csel x10, x9, x10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel x10, x11, x10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs x8, s0
+; CHECK-NEXT: csel x10, xzr, x10, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x8, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x10
+; CHECK-NEXT: ret
+ %x = call <2 x i64> @llvm.fptosi.sat.v2f32.v2i64(<2 x float> %f)
+ ret <2 x i64> %x
+}
+
+define <2 x i100> @test_signed_v2f32_v2i100(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #80 // =80
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 80
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -40
+; CHECK-NEXT: .cfi_offset b8, -48
+; CHECK-NEXT: .cfi_offset b9, -56
+; CHECK-NEXT: .cfi_offset b10, -64
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s8, v0.s[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-251658240
+; CHECK-NEXT: mov w9, #1895825407
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov x21, #-34359738368
+; CHECK-NEXT: fmov s10, w9
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: mov x22, #34359738367
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x21, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x22, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: // kill: def $s0 killed $s0 killed $q0
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s0, s9
+; CHECK-NEXT: csel x8, x21, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp s0, s10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x22, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: ldp x22, x21, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #80 // =80
+; CHECK-NEXT: ret
+ %x = call <2 x i100> @llvm.fptosi.sat.v2f32.v2i100(<2 x float> %f)
+ ret <2 x i100> %x
+}
+
+define <2 x i128> @test_signed_v2f32_v2i128(<2 x float> %f) {
+; CHECK-LABEL: test_signed_v2f32_v2i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #80 // =80
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 80
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -40
+; CHECK-NEXT: .cfi_offset b8, -48
+; CHECK-NEXT: .cfi_offset b9, -56
+; CHECK-NEXT: .cfi_offset b10, -64
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s8, v0.s[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: mov w8, #-16777216
+; CHECK-NEXT: mov w9, #2130706431
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov x21, #-9223372036854775808
+; CHECK-NEXT: fmov s10, w9
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: mov x22, #9223372036854775807
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x21, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x22, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: // kill: def $s0 killed $s0 killed $q0
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s0, s9
+; CHECK-NEXT: csel x8, x21, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp s0, s10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x22, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: ldp x22, x21, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #80 // =80
+; CHECK-NEXT: ret
+ %x = call <2 x i128> @llvm.fptosi.sat.v2f32.v2i128(<2 x float> %f)
+ ret <2 x i128> %x
+}
+
+;
+; 2-Vector double to signed integer -- result size variation
+;
+
+declare <2 x i1> @llvm.fptosi.sat.v2f64.v2i1 (<2 x double>)
+declare <2 x i8> @llvm.fptosi.sat.v2f64.v2i8 (<2 x double>)
+declare <2 x i13> @llvm.fptosi.sat.v2f64.v2i13 (<2 x double>)
+declare <2 x i16> @llvm.fptosi.sat.v2f64.v2i16 (<2 x double>)
+declare <2 x i19> @llvm.fptosi.sat.v2f64.v2i19 (<2 x double>)
+declare <2 x i50> @llvm.fptosi.sat.v2f64.v2i50 (<2 x double>)
+declare <2 x i64> @llvm.fptosi.sat.v2f64.v2i64 (<2 x double>)
+declare <2 x i100> @llvm.fptosi.sat.v2f64.v2i100(<2 x double>)
+declare <2 x i128> @llvm.fptosi.sat.v2f64.v2i128(<2 x double>)
+
+define <2 x i1> @test_signed_v2f64_v2i1(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: fmov d2, #-1.00000000
+; CHECK-NEXT: fmov d3, xzr
+; CHECK-NEXT: fmaxnm d4, d1, d2
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d4, d3
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i1> @llvm.fptosi.sat.v2f64.v2i1(<2 x double> %f)
+ ret <2 x i1> %x
+}
+
+define <2 x i8> @test_signed_v2f64_v2i8(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4584664420663164928
+; CHECK-NEXT: mov x9, #211106232532992
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16479, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i8> @llvm.fptosi.sat.v2f64.v2i8(<2 x double> %f)
+ ret <2 x i8> %x
+}
+
+define <2 x i13> @test_signed_v2f64_v2i13(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4562146422526312448
+; CHECK-NEXT: mov x9, #279275953455104
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16559, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i13> @llvm.fptosi.sat.v2f64.v2i13(<2 x double> %f)
+ ret <2 x i13> %x
+}
+
+define <2 x i16> @test_signed_v2f64_v2i16(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4548635623644200960
+; CHECK-NEXT: mov x9, #281200098803712
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16607, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i16> @llvm.fptosi.sat.v2f64.v2i16(<2 x double> %f)
+ ret <2 x i16> %x
+}
+
+define <2 x i19> @test_signed_v2f64_v2i19(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4535124824762089472
+; CHECK-NEXT: mov x9, #281440616972288
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16655, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i19> @llvm.fptosi.sat.v2f64.v2i19(<2 x double> %f)
+ ret <2 x i19> %x
+}
+
+define <2 x i32> @test_signed_v2f64_v2i32_duplicate(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4476578029606273024
+; CHECK-NEXT: mov x9, #281474972516352
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #16863, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs w8, d2
+; CHECK-NEXT: fcvtzs w9, d1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptosi.sat.v2f64.v2i32(<2 x double> %f)
+ ret <2 x i32> %x
+}
+
+define <2 x i50> @test_signed_v2f64_v2i50(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4395513236313604096
+; CHECK-NEXT: mov x9, #-16
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: movk x9, #17151, lsl #48
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fmaxnm d3, d1, d2
+; CHECK-NEXT: fmov d4, x9
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fmaxnm d1, d0, d2
+; CHECK-NEXT: fminnm d2, d3, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzs x8, d2
+; CHECK-NEXT: fcvtzs x9, d1
+; CHECK-NEXT: csel x8, xzr, x8, vs
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: ret
+ %x = call <2 x i50> @llvm.fptosi.sat.v2f64.v2i50(<2 x double> %f)
+ ret <2 x i50> %x
+}
+
+define <2 x i64> @test_signed_v2f64_v2i64(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-4332462841530417152
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: mov x10, #4890909195324358655
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: mov x9, #-9223372036854775808
+; CHECK-NEXT: fmov d3, x10
+; CHECK-NEXT: fcvtzs x10, d1
+; CHECK-NEXT: fcmp d1, d2
+; CHECK-NEXT: mov x11, #9223372036854775807
+; CHECK-NEXT: csel x10, x9, x10, lt
+; CHECK-NEXT: fcmp d1, d3
+; CHECK-NEXT: csel x10, x11, x10, gt
+; CHECK-NEXT: fcmp d1, d1
+; CHECK-NEXT: fcvtzs x8, d0
+; CHECK-NEXT: csel x10, xzr, x10, vs
+; CHECK-NEXT: fcmp d0, d2
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: fcmp d0, d3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x8, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x10
+; CHECK-NEXT: ret
+ %x = call <2 x i64> @llvm.fptosi.sat.v2f64.v2i64(<2 x double> %f)
+ ret <2 x i64> %x
+}
+
+define <2 x i100> @test_signed_v2f64_v2i100(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #80 // =80
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 80
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -40
+; CHECK-NEXT: .cfi_offset b8, -48
+; CHECK-NEXT: .cfi_offset b9, -56
+; CHECK-NEXT: .cfi_offset b10, -64
+; CHECK-NEXT: mov d8, v0.d[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: mov x8, #-4170333254945079296
+; CHECK-NEXT: mov x9, #5053038781909696511
+; CHECK-NEXT: fmov d9, x8
+; CHECK-NEXT: mov x21, #-34359738368
+; CHECK-NEXT: fmov d10, x9
+; CHECK-NEXT: fcmp d8, d9
+; CHECK-NEXT: mov x22, #34359738367
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x21, x1, lt
+; CHECK-NEXT: fcmp d8, d10
+; CHECK-NEXT: csel x9, x22, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp d8, d8
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp d0, d9
+; CHECK-NEXT: csel x8, x21, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp d0, d10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x22, x8, gt
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: ldp x22, x21, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #80 // =80
+; CHECK-NEXT: ret
+ %x = call <2 x i100> @llvm.fptosi.sat.v2f64.v2i100(<2 x double> %f)
+ ret <2 x i100> %x
+}
+
+define <2 x i128> @test_signed_v2f64_v2i128(<2 x double> %f) {
+; CHECK-LABEL: test_signed_v2f64_v2i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #80 // =80
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 80
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w30, -40
+; CHECK-NEXT: .cfi_offset b8, -48
+; CHECK-NEXT: .cfi_offset b9, -56
+; CHECK-NEXT: .cfi_offset b10, -64
+; CHECK-NEXT: mov d8, v0.d[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: mov x8, #-4044232465378705408
+; CHECK-NEXT: mov x9, #5179139571476070399
+; CHECK-NEXT: fmov d9, x8
+; CHECK-NEXT: mov x21, #-9223372036854775808
+; CHECK-NEXT: fmov d10, x9
+; CHECK-NEXT: fcmp d8, d9
+; CHECK-NEXT: mov x22, #9223372036854775807
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x21, x1, lt
+; CHECK-NEXT: fcmp d8, d10
+; CHECK-NEXT: csel x9, x22, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp d8, d8
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: bl __fixdfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp d0, d9
+; CHECK-NEXT: csel x8, x21, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp d0, d10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x22, x8, gt
+; CHECK-NEXT: fcmp d0, d0
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: ldp x22, x21, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #80 // =80
+; CHECK-NEXT: ret
+ %x = call <2 x i128> @llvm.fptosi.sat.v2f64.v2i128(<2 x double> %f)
+ ret <2 x i128> %x
+}
+
+;
+; 4-Vector half to signed integer -- result size variation
+;
+
+declare <4 x i1> @llvm.fptosi.sat.v4f16.v4i1 (<4 x half>)
+declare <4 x i8> @llvm.fptosi.sat.v4f16.v4i8 (<4 x half>)
+declare <4 x i13> @llvm.fptosi.sat.v4f16.v4i13 (<4 x half>)
+declare <4 x i16> @llvm.fptosi.sat.v4f16.v4i16 (<4 x half>)
+declare <4 x i19> @llvm.fptosi.sat.v4f16.v4i19 (<4 x half>)
+declare <4 x i50> @llvm.fptosi.sat.v4f16.v4i50 (<4 x half>)
+declare <4 x i64> @llvm.fptosi.sat.v4f16.v4i64 (<4 x half>)
+declare <4 x i100> @llvm.fptosi.sat.v4f16.v4i100(<4 x half>)
+declare <4 x i128> @llvm.fptosi.sat.v4f16.v4i128(<4 x half>)
+
+define <4 x i1> @test_signed_v4f16_v4i1(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s2, #-1.00000000
+; CHECK-NEXT: fcvt s4, h0
+; CHECK-NEXT: fmov s3, wzr
+; CHECK-NEXT: fmaxnm s5, s4, s2
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fminnm s5, s5, s3
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvt s6, h0
+; CHECK-NEXT: fmaxnm s0, s1, s2
+; CHECK-NEXT: fminnm s0, s0, s3
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: fmaxnm s0, s5, s2
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp s4, s4
+; CHECK-NEXT: fmaxnm s1, s6, s2
+; CHECK-NEXT: fminnm s2, s0, s3
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.h[1], w9
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, vs
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i1> @llvm.fptosi.sat.v4f16.v4i1(<4 x half> %f)
+ ret <4 x i1> %x
+}
+
+define <4 x i8> @test_signed_v4f16_v4i8(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-1023410176
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov w9, #1123942400
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fmaxnm s5, s2, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fminnm s5, s5, s4
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvt s6, h0
+; CHECK-NEXT: fmaxnm s0, s1, s3
+; CHECK-NEXT: fminnm s0, s0, s4
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: fmaxnm s0, s5, s3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmaxnm s1, s6, s3
+; CHECK-NEXT: fminnm s3, s0, s4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.h[1], w9
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, vs
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i8> @llvm.fptosi.sat.v4f16.v4i8(<4 x half> %f)
+ ret <4 x i8> %x
+}
+
+define <4 x i13> @test_signed_v4f16_v4i13(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-981467136
+; CHECK-NEXT: mov w9, #61440
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: movk w9, #17791, lsl #16
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fmaxnm s5, s2, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fminnm s5, s5, s4
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvt s6, h0
+; CHECK-NEXT: fmaxnm s0, s1, s3
+; CHECK-NEXT: fminnm s0, s0, s4
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: fmaxnm s0, s5, s3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmaxnm s1, s6, s3
+; CHECK-NEXT: fminnm s3, s0, s4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.h[1], w9
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, vs
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i13> @llvm.fptosi.sat.v4f16.v4i13(<4 x half> %f)
+ ret <4 x i13> %x
+}
+
+define <4 x i16> @test_signed_v4f16_v4i16(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-956301312
+; CHECK-NEXT: mov w9, #65024
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: movk w9, #18175, lsl #16
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fmaxnm s5, s2, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fminnm s5, s5, s4
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvt s6, h0
+; CHECK-NEXT: fmaxnm s0, s1, s3
+; CHECK-NEXT: fminnm s0, s0, s4
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: fmaxnm s0, s5, s3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmaxnm s1, s6, s3
+; CHECK-NEXT: fminnm s3, s0, s4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.h[1], w9
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, vs
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i16> @llvm.fptosi.sat.v4f16.v4i16(<4 x half> %f)
+ ret <4 x i16> %x
+}
+
+define <4 x i19> @test_signed_v4f16_v4i19(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-931135488
+; CHECK-NEXT: mov w9, #65472
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: movk w9, #18559, lsl #16
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmov s4, w9
+; CHECK-NEXT: fmaxnm s5, s2, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fminnm s5, s5, s4
+; CHECK-NEXT: fcvtzs w8, s5
+; CHECK-NEXT: mov h5, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvt s6, h0
+; CHECK-NEXT: fmaxnm s0, s1, s3
+; CHECK-NEXT: fminnm s0, s0, s4
+; CHECK-NEXT: fcvt s5, h5
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w9, s0
+; CHECK-NEXT: fmaxnm s0, s5, s3
+; CHECK-NEXT: csel w9, wzr, w9, vs
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: fmaxnm s1, s6, s3
+; CHECK-NEXT: fminnm s3, s0, s4
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fminnm s1, s1, s4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzs w8, s3
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: mov v0.s[1], w9
+; CHECK-NEXT: fcvtzs w9, s1
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s6, s6
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i19> @llvm.fptosi.sat.v4f16.v4i19(<4 x half> %f)
+ ret <4 x i19> %x
+}
+
+define <4 x i32> @test_signed_v4f16_v4i32_duplicate(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #-822083584
+; CHECK-NEXT: mov w10, #1325400063
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: mov w9, #-2147483648
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs w8, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov w11, #2147483647
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs w10, s2
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w10
+; CHECK-NEXT: fcvtzs w10, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel w10, w9, w10, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: csel w10, w11, w10, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzs w8, s2
+; CHECK-NEXT: csel w10, wzr, w10, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: csel w8, w9, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: csel w8, w11, w8, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: mov v0.s[2], w10
+; CHECK-NEXT: csel w8, wzr, w8, vs
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptosi.sat.v4f16.v4i32(<4 x half> %f)
+ ret <4 x i32> %x
+}
+
+define <4 x i50> @test_signed_v4f16_v4i50(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #-671088640
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w10, #1476395007
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: mov x9, #-562949953421312
+; CHECK-NEXT: fcvtzs x12, s1
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fmov s3, w10
+; CHECK-NEXT: mov x11, #562949953421311
+; CHECK-NEXT: csel x8, x9, x12, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs x10, s1
+; CHECK-NEXT: csel x0, xzr, x8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: csel x8, x9, x10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzs x10, s1
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: csel x8, x9, x10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs x12, s0
+; CHECK-NEXT: csel x2, xzr, x8, vs
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel x8, x9, x12, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s0, s0
+; CHECK-NEXT: csel x3, xzr, x8, vs
+; CHECK-NEXT: ret
+ %x = call <4 x i50> @llvm.fptosi.sat.v4f16.v4i50(<4 x half> %f)
+ ret <4 x i50> %x
+}
+
+define <4 x i64> @test_signed_v4f16_v4i64(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #-553648128
+; CHECK-NEXT: mov w10, #1593835519
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: mov x9, #-9223372036854775808
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: fcvtzs x8, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov x11, #9223372036854775807
+; CHECK-NEXT: csel x8, x9, x8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel x8, x11, x8, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fcvtzs x10, s2
+; CHECK-NEXT: csel x8, xzr, x8, vs
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: csel x10, x9, x10, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel x10, x11, x10, gt
+; CHECK-NEXT: fcmp s2, s2
+; CHECK-NEXT: csel x10, xzr, x10, vs
+; CHECK-NEXT: fcvtzs x12, s1
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h0, v0.h[2]
+; CHECK-NEXT: csel x12, x9, x12, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s5, h0
+; CHECK-NEXT: csel x12, x11, x12, gt
+; CHECK-NEXT: fcmp s1, s1
+; CHECK-NEXT: fmov d0, x10
+; CHECK-NEXT: fcvtzs x10, s5
+; CHECK-NEXT: csel x12, xzr, x12, vs
+; CHECK-NEXT: fcmp s5, s3
+; CHECK-NEXT: csel x9, x9, x10, lt
+; CHECK-NEXT: fcmp s5, s4
+; CHECK-NEXT: csel x9, x11, x9, gt
+; CHECK-NEXT: fcmp s5, s5
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: fmov d1, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: mov v1.d[1], x12
+; CHECK-NEXT: ret
+ %x = call <4 x i64> @llvm.fptosi.sat.v4f16.v4i64(<4 x half> %f)
+ ret <4 x i64> %x
+}
+
+define <4 x i100> @test_signed_v4f16_v4i100(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #112 // =112
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x26, x25, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x24, x23, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #96] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 112
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w23, -40
+; CHECK-NEXT: .cfi_offset w24, -48
+; CHECK-NEXT: .cfi_offset w25, -56
+; CHECK-NEXT: .cfi_offset w26, -64
+; CHECK-NEXT: .cfi_offset w30, -72
+; CHECK-NEXT: .cfi_offset b8, -80
+; CHECK-NEXT: .cfi_offset b9, -88
+; CHECK-NEXT: .cfi_offset b10, -96
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s8, h1
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w8, #-251658240
+; CHECK-NEXT: mov w9, #1895825407
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov x25, #-34359738368
+; CHECK-NEXT: fmov s10, w9
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: mov x26, #34359738367
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: mov h0, v0.h[2]
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x21, xzr, x8, vs
+; CHECK-NEXT: csel x22, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x23, xzr, x8, vs
+; CHECK-NEXT: csel x24, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, x25, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x26, x8, gt
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: mov x4, x21
+; CHECK-NEXT: mov x5, x22
+; CHECK-NEXT: mov x6, x23
+; CHECK-NEXT: mov x7, x24
+; CHECK-NEXT: ldp x20, x19, [sp, #96] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x24, x23, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x26, x25, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #112 // =112
+; CHECK-NEXT: ret
+ %x = call <4 x i100> @llvm.fptosi.sat.v4f16.v4i100(<4 x half> %f)
+ ret <4 x i100> %x
+}
+
+define <4 x i128> @test_signed_v4f16_v4i128(<4 x half> %f) {
+; CHECK-LABEL: test_signed_v4f16_v4i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #112 // =112
+; CHECK-NEXT: str d10, [sp, #16] // 8-byte Folded Spill
+; CHECK-NEXT: stp d9, d8, [sp, #24] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #40] // 8-byte Folded Spill
+; CHECK-NEXT: stp x26, x25, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x24, x23, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #96] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 112
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w23, -40
+; CHECK-NEXT: .cfi_offset w24, -48
+; CHECK-NEXT: .cfi_offset w25, -56
+; CHECK-NEXT: .cfi_offset w26, -64
+; CHECK-NEXT: .cfi_offset w30, -72
+; CHECK-NEXT: .cfi_offset b8, -80
+; CHECK-NEXT: .cfi_offset b9, -88
+; CHECK-NEXT: .cfi_offset b10, -96
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s8, h1
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w8, #-16777216
+; CHECK-NEXT: mov w9, #2130706431
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov x25, #-9223372036854775808
+; CHECK-NEXT: fmov s10, w9
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: mov x26, #9223372036854775807
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: mov h0, v0.h[2]
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x19, xzr, x8, vs
+; CHECK-NEXT: csel x20, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x21, xzr, x8, vs
+; CHECK-NEXT: csel x22, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, x25, x1, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csel x9, x26, x9, gt
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x23, xzr, x8, vs
+; CHECK-NEXT: csel x24, xzr, x9, vs
+; CHECK-NEXT: bl __fixsfti
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csel x8, x25, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s10
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: csel x8, x26, x8, gt
+; CHECK-NEXT: fcmp s8, s8
+; CHECK-NEXT: csel x9, xzr, x9, vs
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: mov x4, x21
+; CHECK-NEXT: mov x5, x22
+; CHECK-NEXT: mov x6, x23
+; CHECK-NEXT: mov x7, x24
+; CHECK-NEXT: ldp x20, x19, [sp, #96] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x24, x23, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x26, x25, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #40] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #24] // 16-byte Folded Reload
+; CHECK-NEXT: ldr d10, [sp, #16] // 8-byte Folded Reload
+; CHECK-NEXT: csel x1, xzr, x8, vs
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #112 // =112
+; CHECK-NEXT: ret
+ %x = call <4 x i128> @llvm.fptosi.sat.v4f16.v4i128(<4 x half> %f)
+ ret <4 x i128> %x
+}
+
diff --git a/llvm/test/CodeGen/AArch64/fptoui-sat-scalar.ll b/llvm/test/CodeGen/AArch64/fptoui-sat-scalar.ll
new file mode 100644
index 000000000000..ef29e7890357
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/fptoui-sat-scalar.ll
@@ -0,0 +1,549 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=aarch64 < %s | FileCheck %s
+
+;
+; 32-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f32 (float)
+declare i8 @llvm.fptoui.sat.i8.f32 (float)
+declare i13 @llvm.fptoui.sat.i13.f32 (float)
+declare i16 @llvm.fptoui.sat.i16.f32 (float)
+declare i19 @llvm.fptoui.sat.i19.f32 (float)
+declare i32 @llvm.fptoui.sat.i32.f32 (float)
+declare i50 @llvm.fptoui.sat.i50.f32 (float)
+declare i64 @llvm.fptoui.sat.i64.f32 (float)
+declare i100 @llvm.fptoui.sat.i100.f32(float)
+declare i128 @llvm.fptoui.sat.i128.f32(float)
+
+define i1 @test_unsigned_i1_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i1_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, #1.00000000
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptoui.sat.i1.f32(float %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i8_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: mov w8, #1132396544
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptoui.sat.i8.f32(float %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i13_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #63488
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #17919, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptoui.sat.i13.f32(float %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i16_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65280
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18303, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptoui.sat.i16.f32(float %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i19_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65504
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18687, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptoui.sat.i19.f32(float %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i32_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1333788671
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptoui.sat.i32.f32(float %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i50_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1484783615
+; CHECK-NEXT: fcvtzu x8, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: mov x9, #1125899906842623
+; CHECK-NEXT: csel x0, x9, x8, gt
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptoui.sat.i50.f32(float %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i64_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1602224127
+; CHECK-NEXT: fcvtzu x8, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csinv x0, x8, xzr, le
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptoui.sat.i64.f32(float %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i100_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #1904214015
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x9, #68719476735
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: csel x11, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x1, x9, x11, gt
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptoui.sat.i100.f32(float %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f32(float %f) nounwind {
+; CHECK-LABEL: test_unsigned_i128_f32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #2139095039
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptoui.sat.i128.f32(float %f)
+ ret i128 %x
+}
+
+;
+; 64-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f64 (double)
+declare i8 @llvm.fptoui.sat.i8.f64 (double)
+declare i13 @llvm.fptoui.sat.i13.f64 (double)
+declare i16 @llvm.fptoui.sat.i16.f64 (double)
+declare i19 @llvm.fptoui.sat.i19.f64 (double)
+declare i32 @llvm.fptoui.sat.i32.f64 (double)
+declare i50 @llvm.fptoui.sat.i50.f64 (double)
+declare i64 @llvm.fptoui.sat.i64.f64 (double)
+declare i100 @llvm.fptoui.sat.i100.f64(double)
+declare i128 @llvm.fptoui.sat.i128.f64(double)
+
+define i1 @test_unsigned_i1_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i1_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, #1.00000000
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptoui.sat.i1.f64(double %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i8_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #246290604621824
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16495, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptoui.sat.i8.f64(double %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i13_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #280375465082880
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16575, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptoui.sat.i13.f64(double %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i16_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281337537757184
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16623, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptoui.sat.i16.f64(double %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i19_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281457796841472
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16671, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptoui.sat.i19.f64(double %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i32_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptoui.sat.i32.f64(double %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i50_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-8
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #17167, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu x0, d0
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptoui.sat.i50.f64(double %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i64_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x9, #4895412794951729151
+; CHECK-NEXT: fcvtzu x8, d0
+; CHECK-NEXT: fcmp d0, #0.0
+; CHECK-NEXT: fmov d1, x9
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp d0, d1
+; CHECK-NEXT: csinv x0, x8, xzr, le
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptoui.sat.i64.f64(double %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i100_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: mov x8, #5057542381537067007
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp d8, #0.0
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov x9, #68719476735
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: csel x11, xzr, x1, lt
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: csel x1, x9, x11, gt
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptoui.sat.i100.f64(double %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f64(double %f) nounwind {
+; CHECK-LABEL: test_unsigned_i128_f64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: mov v8.16b, v0.16b
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: mov x8, #5183643171103440895
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp d8, #0.0
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp d8, d0
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptoui.sat.i128.f64(double %f)
+ ret i128 %x
+}
+
+;
+; 16-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f16 (half)
+declare i8 @llvm.fptoui.sat.i8.f16 (half)
+declare i13 @llvm.fptoui.sat.i13.f16 (half)
+declare i16 @llvm.fptoui.sat.i16.f16 (half)
+declare i19 @llvm.fptoui.sat.i19.f16 (half)
+declare i32 @llvm.fptoui.sat.i32.f16 (half)
+declare i50 @llvm.fptoui.sat.i50.f16 (half)
+declare i64 @llvm.fptoui.sat.i64.f16 (half)
+declare i100 @llvm.fptoui.sat.i100.f16(half)
+declare i128 @llvm.fptoui.sat.i128.f16(half)
+
+define i1 @test_unsigned_i1_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i1_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, #1.00000000
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: and w0, w8, #0x1
+; CHECK-NEXT: ret
+ %x = call i1 @llvm.fptoui.sat.i1.f16(half %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i8_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: mov w8, #1132396544
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i8 @llvm.fptoui.sat.i8.f16(half %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i13_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #63488
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #17919, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i13 @llvm.fptoui.sat.i13.f16(half %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i16_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65280
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18303, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i16 @llvm.fptoui.sat.i16.f16(half %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i19_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65504
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18687, lsl #16
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fminnm s0, s0, s1
+; CHECK-NEXT: fcvtzu w0, s0
+; CHECK-NEXT: ret
+ %x = call i19 @llvm.fptoui.sat.i19.f16(half %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i32_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: ret
+ %x = call i32 @llvm.fptoui.sat.i32.f16(half %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i50_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov w8, #1484783615
+; CHECK-NEXT: fcvtzu x9, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: csel x8, xzr, x9, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: mov x9, #1125899906842623
+; CHECK-NEXT: csel x0, x9, x8, gt
+; CHECK-NEXT: ret
+ %x = call i50 @llvm.fptoui.sat.i50.f16(half %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i64_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov w8, #1602224127
+; CHECK-NEXT: fcvtzu x9, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: csel x8, xzr, x9, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csinv x0, x8, xzr, le
+; CHECK-NEXT: ret
+ %x = call i64 @llvm.fptoui.sat.i64.f16(half %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i100_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #1904214015
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov x9, #68719476735
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: csel x11, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csel x1, x9, x11, gt
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i100 @llvm.fptoui.sat.i100.f16(half %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f16(half %f) nounwind {
+; CHECK-LABEL: test_unsigned_i128_f16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: str d8, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: str x30, [sp, #8] // 8-byte Folded Spill
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #2139095039
+; CHECK-NEXT: ldr x30, [sp, #8] // 8-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s0
+; CHECK-NEXT: csinv x0, x10, xzr, le
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: ldr d8, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+ %x = call i128 @llvm.fptoui.sat.i128.f16(half %f)
+ ret i128 %x
+}
diff --git a/llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll b/llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll
new file mode 100644
index 000000000000..89233dedb054
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll
@@ -0,0 +1,2196 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=aarch64 < %s | FileCheck %s
+
+;
+; Float to unsigned 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptoui.sat.v1f32.v1i32 (<1 x float>)
+declare <2 x i32> @llvm.fptoui.sat.v2f32.v2i32 (<2 x float>)
+declare <3 x i32> @llvm.fptoui.sat.v3f32.v3i32 (<3 x float>)
+declare <4 x i32> @llvm.fptoui.sat.v4f32.v4i32 (<4 x float>)
+declare <5 x i32> @llvm.fptoui.sat.v5f32.v5i32 (<5 x float>)
+declare <6 x i32> @llvm.fptoui.sat.v6f32.v6i32 (<6 x float>)
+declare <7 x i32> @llvm.fptoui.sat.v7f32.v7i32 (<7 x float>)
+declare <8 x i32> @llvm.fptoui.sat.v8f32.v8i32 (<8 x float>)
+
+define <1 x i32> @test_unsigned_v1f32_v1i32(<1 x float> %f) {
+; CHECK-LABEL: test_unsigned_v1f32_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptoui.sat.v1f32.v1i32(<1 x float> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_unsigned_v2f32_v2i32(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f32.v2i32(<2 x float> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_unsigned_v3f32_v3i32(<3 x float> %f) {
+; CHECK-LABEL: test_unsigned_v3f32_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: mov s2, v0.s[2]
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov s1, v0.s[3]
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptoui.sat.v3f32.v3i32(<3 x float> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_unsigned_v4f32_v4i32(<4 x float> %f) {
+; CHECK-LABEL: test_unsigned_v4f32_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: mov s2, v0.s[2]
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov s1, v0.s[3]
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptoui.sat.v4f32.v4i32(<4 x float> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_unsigned_v5f32_v5i32(<5 x float> %f) {
+; CHECK-LABEL: test_unsigned_v5f32_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1333788671
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s5, w9
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s0, s5
+; CHECK-NEXT: fcvtzu w10, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w10, lt
+; CHECK-NEXT: fcmp s1, s5
+; CHECK-NEXT: fcvtzu w11, s2
+; CHECK-NEXT: csinv w1, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w11, lt
+; CHECK-NEXT: fcmp s2, s5
+; CHECK-NEXT: fcvtzu w12, s3
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: csel w8, wzr, w12, lt
+; CHECK-NEXT: fcmp s3, s5
+; CHECK-NEXT: fcvtzu w9, s4
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: fcmp s4, #0.0
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s4, s5
+; CHECK-NEXT: csinv w4, w8, wzr, le
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptoui.sat.v5f32.v5i32(<5 x float> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_unsigned_v6f32_v6i32(<6 x float> %f) {
+; CHECK-LABEL: test_unsigned_v6f32_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1333788671
+; CHECK-NEXT: fcvtzu w8, s5
+; CHECK-NEXT: fcmp s5, #0.0
+; CHECK-NEXT: fmov s6, w9
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s5, s6
+; CHECK-NEXT: fcvtzu w10, s4
+; CHECK-NEXT: csinv w5, w8, wzr, le
+; CHECK-NEXT: fcmp s4, #0.0
+; CHECK-NEXT: csel w8, wzr, w10, lt
+; CHECK-NEXT: fcmp s4, s6
+; CHECK-NEXT: fcvtzu w11, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s4, w8
+; CHECK-NEXT: csel w8, wzr, w11, lt
+; CHECK-NEXT: fcmp s0, s6
+; CHECK-NEXT: fcvtzu w12, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w12, lt
+; CHECK-NEXT: fcmp s1, s6
+; CHECK-NEXT: fcvtzu w13, s2
+; CHECK-NEXT: csinv w1, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w13, lt
+; CHECK-NEXT: fcmp s2, s6
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: mov v4.s[1], w5
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s6
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: fmov w4, s4
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptoui.sat.v6f32.v6i32(<6 x float> %f)
+ ret <6 x i32> %x
+}
+
+define <7 x i32> @test_unsigned_v7f32_v7i32(<7 x float> %f) {
+; CHECK-LABEL: test_unsigned_v7f32_v7i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w9, #1333788671
+; CHECK-NEXT: fcvtzu w8, s5
+; CHECK-NEXT: fcmp s5, #0.0
+; CHECK-NEXT: fmov s7, w9
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s5, s7
+; CHECK-NEXT: fcvtzu w10, s4
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s4, #0.0
+; CHECK-NEXT: csel w10, wzr, w10, lt
+; CHECK-NEXT: fcmp s4, s7
+; CHECK-NEXT: fcvtzu w11, s6
+; CHECK-NEXT: csinv w10, w10, wzr, le
+; CHECK-NEXT: fcmp s6, #0.0
+; CHECK-NEXT: fmov s4, w10
+; CHECK-NEXT: csel w10, wzr, w11, lt
+; CHECK-NEXT: fcmp s6, s7
+; CHECK-NEXT: fcvtzu w12, s0
+; CHECK-NEXT: csinv w6, w10, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: mov v4.s[1], w8
+; CHECK-NEXT: csel w8, wzr, w12, lt
+; CHECK-NEXT: fcmp s0, s7
+; CHECK-NEXT: fcvtzu w13, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w13, lt
+; CHECK-NEXT: fcmp s1, s7
+; CHECK-NEXT: fcvtzu w14, s2
+; CHECK-NEXT: csinv w1, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w14, lt
+; CHECK-NEXT: fcmp s2, s7
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: mov v4.s[2], w6
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s7
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: mov w5, v4.s[1]
+; CHECK-NEXT: fmov w4, s4
+; CHECK-NEXT: ret
+ %x = call <7 x i32> @llvm.fptoui.sat.v7f32.v7i32(<7 x float> %f)
+ ret <7 x i32> %x
+}
+
+define <8 x i32> @test_unsigned_v8f32_v8i32(<8 x float> %f) {
+; CHECK-LABEL: test_unsigned_v8f32_v8i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov s2, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s4, w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s4
+; CHECK-NEXT: mov s3, v0.s[2]
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov s2, v0.s[3]
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov s3, v1.s[1]
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov v0.s[3], w9
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: mov s2, v1.s[2]
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov s3, v1.s[3]
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov v1.s[1], w9
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: mov v1.s[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v1.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <8 x i32> @llvm.fptoui.sat.v8f32.v8i32(<8 x float> %f)
+ ret <8 x i32> %x
+}
+
+;
+; Double to unsigned 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptoui.sat.v1f64.v1i32 (<1 x double>)
+declare <2 x i32> @llvm.fptoui.sat.v2f64.v2i32 (<2 x double>)
+declare <3 x i32> @llvm.fptoui.sat.v3f64.v3i32 (<3 x double>)
+declare <4 x i32> @llvm.fptoui.sat.v4f64.v4i32 (<4 x double>)
+declare <5 x i32> @llvm.fptoui.sat.v5f64.v5i32 (<5 x double>)
+declare <6 x i32> @llvm.fptoui.sat.v6f64.v6i32 (<6 x double>)
+
+define <1 x i32> @test_unsigned_v1f64_v1i32(<1 x double> %f) {
+; CHECK-LABEL: test_unsigned_v1f64_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d1, x8
+; CHECK-NEXT: fminnm d0, d0, d1
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptoui.sat.v1f64.v1i32(<1 x double> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_unsigned_v2f64_v2i32(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f64.v2i32(<2 x double> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_unsigned_v3f64_v3i32(<3 x double> %f) {
+; CHECK-LABEL: test_unsigned_v3f64_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d3, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d3
+; CHECK-NEXT: fmov d4, x8
+; CHECK-NEXT: fmaxnm d1, d1, d3
+; CHECK-NEXT: fmaxnm d2, d2, d3
+; CHECK-NEXT: fmaxnm d3, d3, d0
+; CHECK-NEXT: fminnm d0, d0, d4
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d2, d2, d4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: fminnm d3, d3, d4
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, d2
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: fcvtzu w8, d3
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptoui.sat.v3f64.v3i32(<3 x double> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_unsigned_v4f64_v4i32(<4 x double> %f) {
+; CHECK-LABEL: test_unsigned_v4f64_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d2, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: mov d3, v0.d[1]
+; CHECK-NEXT: mov d4, v1.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d2
+; CHECK-NEXT: fmaxnm d3, d3, d2
+; CHECK-NEXT: fmaxnm d1, d1, d2
+; CHECK-NEXT: fmaxnm d2, d4, d2
+; CHECK-NEXT: fmov d4, x8
+; CHECK-NEXT: fminnm d0, d0, d4
+; CHECK-NEXT: fminnm d3, d3, d4
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d4
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d3
+; CHECK-NEXT: fminnm d2, d2, d4
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: fcvtzu w8, d2
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptoui.sat.v4f64.v4i32(<4 x double> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_unsigned_v5f64_v5i32(<5 x double> %f) {
+; CHECK-LABEL: test_unsigned_v5f64_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d5, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d5
+; CHECK-NEXT: fmov d6, x8
+; CHECK-NEXT: fmaxnm d1, d1, d5
+; CHECK-NEXT: fmaxnm d2, d2, d5
+; CHECK-NEXT: fmaxnm d3, d3, d5
+; CHECK-NEXT: fmaxnm d4, d4, d5
+; CHECK-NEXT: fminnm d0, d0, d6
+; CHECK-NEXT: fminnm d1, d1, d6
+; CHECK-NEXT: fminnm d2, d2, d6
+; CHECK-NEXT: fminnm d3, d3, d6
+; CHECK-NEXT: fminnm d4, d4, d6
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: fcvtzu w1, d1
+; CHECK-NEXT: fcvtzu w2, d2
+; CHECK-NEXT: fcvtzu w3, d3
+; CHECK-NEXT: fcvtzu w4, d4
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptoui.sat.v5f64.v5i32(<5 x double> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_unsigned_v6f64_v6i32(<6 x double> %f) {
+; CHECK-LABEL: test_unsigned_v6f64_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d6, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: fmaxnm d0, d0, d6
+; CHECK-NEXT: fmov d7, x8
+; CHECK-NEXT: fmaxnm d1, d1, d6
+; CHECK-NEXT: fmaxnm d2, d2, d6
+; CHECK-NEXT: fmaxnm d3, d3, d6
+; CHECK-NEXT: fmaxnm d4, d4, d6
+; CHECK-NEXT: fmaxnm d5, d5, d6
+; CHECK-NEXT: fminnm d0, d0, d7
+; CHECK-NEXT: fminnm d1, d1, d7
+; CHECK-NEXT: fminnm d2, d2, d7
+; CHECK-NEXT: fminnm d3, d3, d7
+; CHECK-NEXT: fminnm d4, d4, d7
+; CHECK-NEXT: fminnm d5, d5, d7
+; CHECK-NEXT: fcvtzu w0, d0
+; CHECK-NEXT: fcvtzu w1, d1
+; CHECK-NEXT: fcvtzu w2, d2
+; CHECK-NEXT: fcvtzu w3, d3
+; CHECK-NEXT: fcvtzu w4, d4
+; CHECK-NEXT: fcvtzu w5, d5
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptoui.sat.v6f64.v6i32(<6 x double> %f)
+ ret <6 x i32> %x
+}
+
+;
+; FP128 to unsigned 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptoui.sat.v1f128.v1i32 (<1 x fp128>)
+declare <2 x i32> @llvm.fptoui.sat.v2f128.v2i32 (<2 x fp128>)
+declare <3 x i32> @llvm.fptoui.sat.v3f128.v3i32 (<3 x fp128>)
+declare <4 x i32> @llvm.fptoui.sat.v4f128.v4i32 (<4 x fp128>)
+
+define <1 x i32> @test_unsigned_v1f128_v1i32(<1 x fp128> %f) {
+; CHECK-LABEL: test_unsigned_v1f128_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #32 // =32
+; CHECK-NEXT: stp x30, x19, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 32
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w30, -16
+; CHECK-NEXT: adrp x8, .LCPI14_0
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: adrp x8, .LCPI14_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI14_1]
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: ldp x30, x19, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: add sp, sp, #32 // =32
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptoui.sat.v1f128.v1i32(<1 x fp128> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_unsigned_v2f128_v2i32(<2 x fp128> %f) {
+; CHECK-LABEL: test_unsigned_v2f128_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #96 // =96
+; CHECK-NEXT: str x30, [sp, #64] // 8-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 96
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: adrp x8, .LCPI15_0
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: stp q1, q0, [sp, #32] // 32-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI15_0]
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q1, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: adrp x8, .LCPI15_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI15_1]
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w20, w19, wzr, le
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w20
+; CHECK-NEXT: ldp x20, x19, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #64] // 8-byte Folded Reload
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: add sp, sp, #96 // =96
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f128.v2i32(<2 x fp128> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_unsigned_v3f128_v3i32(<3 x fp128> %f) {
+; CHECK-LABEL: test_unsigned_v3f128_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #112 // =112
+; CHECK-NEXT: str x30, [sp, #80] // 8-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #96] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 112
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: adrp x8, .LCPI16_0
+; CHECK-NEXT: stp q0, q2, [sp, #48] // 32-byte Folded Spill
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI16_0]
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q1, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: adrp x8, .LCPI16_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI16_1]
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: str q1, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldp q1, q0, [sp, #32] // 32-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w20, w19, wzr, le
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w20
+; CHECK-NEXT: str q0, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldr q0, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: ldr q0, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: ldp x20, x19, [sp, #96] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #80] // 8-byte Folded Reload
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: add sp, sp, #112 // =112
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptoui.sat.v3f128.v3i32(<3 x fp128> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_unsigned_v4f128_v4i32(<4 x fp128> %f) {
+; CHECK-LABEL: test_unsigned_v4f128_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #128 // =128
+; CHECK-NEXT: str x30, [sp, #96] // 8-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #112] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 128
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: adrp x8, .LCPI17_0
+; CHECK-NEXT: stp q0, q2, [sp, #16] // 32-byte Folded Spill
+; CHECK-NEXT: mov v2.16b, v1.16b
+; CHECK-NEXT: str q1, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI17_0]
+; CHECK-NEXT: mov v0.16b, v2.16b
+; CHECK-NEXT: str q3, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: str q1, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: adrp x8, .LCPI17_1
+; CHECK-NEXT: ldr q1, [x8, :lo12:.LCPI17_1]
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: str q1, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w20, w19, wzr, le
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w20
+; CHECK-NEXT: str q0, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldp q0, q1, [sp, #32] // 32-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: str q0, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: ldp q1, q0, [sp, #64] // 32-byte Folded Reload
+; CHECK-NEXT: bl __getf2
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: mov w19, w0
+; CHECK-NEXT: bl __fixunstfsi
+; CHECK-NEXT: ldr q0, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldr q1, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: cmp w19, #0 // =0
+; CHECK-NEXT: csel w19, wzr, w0, lt
+; CHECK-NEXT: bl __gttf2
+; CHECK-NEXT: cmp w0, #0 // =0
+; CHECK-NEXT: ldr q0, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: csinv w8, w19, wzr, le
+; CHECK-NEXT: ldp x20, x19, [sp, #112] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #96] // 8-byte Folded Reload
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: add sp, sp, #128 // =128
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptoui.sat.v4f128.v4i32(<4 x fp128> %f)
+ ret <4 x i32> %x
+}
+
+;
+; FP16 to unsigned 32-bit -- Vector size variation
+;
+
+declare <1 x i32> @llvm.fptoui.sat.v1f16.v1i32 (<1 x half>)
+declare <2 x i32> @llvm.fptoui.sat.v2f16.v2i32 (<2 x half>)
+declare <3 x i32> @llvm.fptoui.sat.v3f16.v3i32 (<3 x half>)
+declare <4 x i32> @llvm.fptoui.sat.v4f16.v4i32 (<4 x half>)
+declare <5 x i32> @llvm.fptoui.sat.v5f16.v5i32 (<5 x half>)
+declare <6 x i32> @llvm.fptoui.sat.v6f16.v6i32 (<6 x half>)
+declare <7 x i32> @llvm.fptoui.sat.v7f16.v7i32 (<7 x half>)
+declare <8 x i32> @llvm.fptoui.sat.v8f16.v8i32 (<8 x half>)
+
+define <1 x i32> @test_unsigned_v1f16_v1i32(<1 x half> %f) {
+; CHECK-LABEL: test_unsigned_v1f16_v1i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s1
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: ret
+ %x = call <1 x i32> @llvm.fptoui.sat.v1f16.v1i32(<1 x half> %f)
+ ret <1 x i32> %x
+}
+
+define <2 x i32> @test_unsigned_v2f16_v2i32(<2 x half> %f) {
+; CHECK-LABEL: test_unsigned_v2f16_v2i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: mov v0.s[1], w9
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f16.v2i32(<2 x half> %f)
+ ret <2 x i32> %x
+}
+
+define <3 x i32> @test_unsigned_v3f16_v3i32(<3 x half> %f) {
+; CHECK-LABEL: test_unsigned_v3f16_v3i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <3 x i32> @llvm.fptoui.sat.v3f16.v3i32(<3 x half> %f)
+ ret <3 x i32> %x
+}
+
+define <4 x i32> @test_unsigned_v4f16_v4i32(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptoui.sat.v4f16.v4i32(<4 x half> %f)
+ ret <4 x i32> %x
+}
+
+define <5 x i32> @test_unsigned_v5f16_v5i32(<5 x half> %f) {
+; CHECK-LABEL: test_unsigned_v5f16_v5i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: csinv w0, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: csinv w1, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: ext v0.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu w10, s0
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w8, wzr, w10, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv w4, w8, wzr, le
+; CHECK-NEXT: ret
+ %x = call <5 x i32> @llvm.fptoui.sat.v5f16.v5i32(<5 x half> %f)
+ ret <5 x i32> %x
+}
+
+define <6 x i32> @test_unsigned_v6f16_v6i32(<6 x half> %f) {
+; CHECK-LABEL: test_unsigned_v6f16_v6i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: mov h2, v1.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: csinv w5, w8, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: mov h2, v0.h[2]
+; CHECK-NEXT: fcvtzu w10, s1
+; CHECK-NEXT: csinv w0, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: csel w9, wzr, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvtzu w11, s2
+; CHECK-NEXT: csinv w1, w9, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: csel w8, wzr, w11, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvtzu w12, s0
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: mov v1.s[1], w5
+; CHECK-NEXT: csel w8, wzr, w12, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: fmov w4, s1
+; CHECK-NEXT: ret
+ %x = call <6 x i32> @llvm.fptoui.sat.v6f16.v6i32(<6 x half> %f)
+ ret <6 x i32> %x
+}
+
+define <7 x i32> @test_unsigned_v7f16_v7i32(<7 x half> %f) {
+; CHECK-LABEL: test_unsigned_v7f16_v7i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: mov h2, v1.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvt s2, h1
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: mov h1, v1.h[2]
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzu w10, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w10, wzr, w10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvtzu w11, s2
+; CHECK-NEXT: csinv w6, w10, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w10, wzr, w11, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: mov h2, v0.h[2]
+; CHECK-NEXT: fcvtzu w11, s1
+; CHECK-NEXT: csinv w0, w10, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: csel w10, wzr, w11, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvtzu w12, s2
+; CHECK-NEXT: fmov s1, w9
+; CHECK-NEXT: csinv w1, w10, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: mov v1.s[1], w8
+; CHECK-NEXT: csel w8, wzr, w12, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: fcvtzu w13, s0
+; CHECK-NEXT: csinv w2, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: mov v1.s[2], w6
+; CHECK-NEXT: csel w8, wzr, w13, lt
+; CHECK-NEXT: fcmp s0, s3
+; CHECK-NEXT: csinv w3, w8, wzr, le
+; CHECK-NEXT: mov w5, v1.s[1]
+; CHECK-NEXT: fmov w4, s1
+; CHECK-NEXT: ret
+ %x = call <7 x i32> @llvm.fptoui.sat.v7f16.v7i32(<7 x half> %f)
+ ret <7 x i32> %x
+}
+
+define <8 x i32> @test_unsigned_v8f16_v8i32(<8 x half> %f) {
+; CHECK-LABEL: test_unsigned_v8f16_v8i32:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s4, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: mov h3, v0.h[2]
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: ext v5.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: mov h2, v5.h[1]
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: fcvt s1, h5
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: mov h2, v5.h[2]
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s4
+; CHECK-NEXT: fcvt s2, h2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov h3, v5.h[3]
+; CHECK-NEXT: fmov s1, w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s4
+; CHECK-NEXT: mov v1.s[1], w9
+; CHECK-NEXT: fcvtzu w9, s3
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s3, #0.0
+; CHECK-NEXT: mov v1.s[2], w8
+; CHECK-NEXT: csel w8, wzr, w9, lt
+; CHECK-NEXT: fcmp s3, s4
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v1.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <8 x i32> @llvm.fptoui.sat.v8f16.v8i32(<8 x half> %f)
+ ret <8 x i32> %x
+}
+
+;
+; 2-Vector float to unsigned integer -- result size variation
+;
+
+declare <2 x i1> @llvm.fptoui.sat.v2f32.v2i1 (<2 x float>)
+declare <2 x i8> @llvm.fptoui.sat.v2f32.v2i8 (<2 x float>)
+declare <2 x i13> @llvm.fptoui.sat.v2f32.v2i13 (<2 x float>)
+declare <2 x i16> @llvm.fptoui.sat.v2f32.v2i16 (<2 x float>)
+declare <2 x i19> @llvm.fptoui.sat.v2f32.v2i19 (<2 x float>)
+declare <2 x i50> @llvm.fptoui.sat.v2f32.v2i50 (<2 x float>)
+declare <2 x i64> @llvm.fptoui.sat.v2f32.v2i64 (<2 x float>)
+declare <2 x i100> @llvm.fptoui.sat.v2f32.v2i100(<2 x float>)
+declare <2 x i128> @llvm.fptoui.sat.v2f32.v2i128(<2 x float>)
+
+define <2 x i1> @test_unsigned_v2f32_v2i1(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: fmov s2, #1.00000000
+; CHECK-NEXT: mov s3, v0.s[1]
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmaxnm s1, s3, s1
+; CHECK-NEXT: fminnm s0, s0, s2
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i1> @llvm.fptoui.sat.v2f32.v2i1(<2 x float> %f)
+ ret <2 x i1> %x
+}
+
+define <2 x i8> @test_unsigned_v2f32_v2i8(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: mov w8, #1132396544
+; CHECK-NEXT: mov s2, v0.s[1]
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmaxnm s1, s2, s1
+; CHECK-NEXT: fminnm s0, s0, s3
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i8> @llvm.fptoui.sat.v2f32.v2i8(<2 x float> %f)
+ ret <2 x i8> %x
+}
+
+define <2 x i13> @test_unsigned_v2f32_v2i13(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #63488
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #17919, lsl #16
+; CHECK-NEXT: mov s2, v0.s[1]
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmaxnm s1, s2, s1
+; CHECK-NEXT: fminnm s0, s0, s3
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i13> @llvm.fptoui.sat.v2f32.v2i13(<2 x float> %f)
+ ret <2 x i13> %x
+}
+
+define <2 x i16> @test_unsigned_v2f32_v2i16(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65280
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18303, lsl #16
+; CHECK-NEXT: mov s2, v0.s[1]
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmaxnm s1, s2, s1
+; CHECK-NEXT: fminnm s0, s0, s3
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i16> @llvm.fptoui.sat.v2f32.v2i16(<2 x float> %f)
+ ret <2 x i16> %x
+}
+
+define <2 x i19> @test_unsigned_v2f32_v2i19(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov w8, #65504
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fmov s1, wzr
+; CHECK-NEXT: movk w8, #18687, lsl #16
+; CHECK-NEXT: mov s2, v0.s[1]
+; CHECK-NEXT: fmaxnm s0, s0, s1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fmaxnm s1, s2, s1
+; CHECK-NEXT: fminnm s0, s0, s3
+; CHECK-NEXT: fcvtzu w8, s0
+; CHECK-NEXT: fminnm s1, s1, s3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i19> @llvm.fptoui.sat.v2f32.v2i19(<2 x float> %f)
+ ret <2 x i19> %x
+}
+
+define <2 x i32> @test_unsigned_v2f32_v2i32_duplicate(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu w9, s0
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f32.v2i32(<2 x float> %f)
+ ret <2 x i32> %x
+}
+
+define <2 x i50> @test_unsigned_v2f32_v2i50(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1484783615
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu x8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: mov x9, #1125899906842623
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu x10, s0
+; CHECK-NEXT: csel x8, x9, x8, gt
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel x10, xzr, x10, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel x9, x9, x10, gt
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: ret
+ %x = call <2 x i50> @llvm.fptoui.sat.v2f32.v2i50(<2 x float> %f)
+ ret <2 x i50> %x
+}
+
+define <2 x i64> @test_unsigned_v2f32_v2i64(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s1, v0.s[1]
+; CHECK-NEXT: mov w8, #1602224127
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fcvtzu x8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu x9, s0
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel x9, xzr, x9, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: ret
+ %x = call <2 x i64> @llvm.fptoui.sat.v2f32.v2i64(<2 x float> %f)
+ ret <2 x i64> %x
+}
+
+define <2 x i100> @test_unsigned_v2f32_v2i100(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #64 // =64
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: stp x30, x21, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 64
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: .cfi_offset b8, -40
+; CHECK-NEXT: .cfi_offset b9, -48
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s8, v0.s[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #1904214015
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov x21, #68719476735
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: csel x10, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, x21, x10, gt
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: // kill: def $s0 killed $s0 killed $q0
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x20
+; CHECK-NEXT: mov x3, x19
+; CHECK-NEXT: ldp x20, x19, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp s0, s9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: csel x1, x21, x9, gt
+; CHECK-NEXT: ldp x30, x21, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #64 // =64
+; CHECK-NEXT: ret
+ %x = call <2 x i100> @llvm.fptoui.sat.v2f32.v2i100(<2 x float> %f)
+ ret <2 x i100> %x
+}
+
+define <2 x i128> @test_unsigned_v2f32_v2i128(<2 x float> %f) {
+; CHECK-LABEL: test_unsigned_v2f32_v2i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #64 // =64
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #32] // 8-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 64
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: .cfi_offset b8, -40
+; CHECK-NEXT: .cfi_offset b9, -48
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov s8, v0.s[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: mov w8, #2139095039
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csinv x19, x10, xzr, le
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: // kill: def $s0 killed $s0 killed $q0
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp s0, s9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: ldr x30, [sp, #32] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #64 // =64
+; CHECK-NEXT: ret
+ %x = call <2 x i128> @llvm.fptoui.sat.v2f32.v2i128(<2 x float> %f)
+ ret <2 x i128> %x
+}
+
+;
+; 2-Vector double to unsigned integer -- result size variation
+;
+
+declare <2 x i1> @llvm.fptoui.sat.v2f64.v2i1 (<2 x double>)
+declare <2 x i8> @llvm.fptoui.sat.v2f64.v2i8 (<2 x double>)
+declare <2 x i13> @llvm.fptoui.sat.v2f64.v2i13 (<2 x double>)
+declare <2 x i16> @llvm.fptoui.sat.v2f64.v2i16 (<2 x double>)
+declare <2 x i19> @llvm.fptoui.sat.v2f64.v2i19 (<2 x double>)
+declare <2 x i50> @llvm.fptoui.sat.v2f64.v2i50 (<2 x double>)
+declare <2 x i64> @llvm.fptoui.sat.v2f64.v2i64 (<2 x double>)
+declare <2 x i100> @llvm.fptoui.sat.v2f64.v2i100(<2 x double>)
+declare <2 x i128> @llvm.fptoui.sat.v2f64.v2i128(<2 x double>)
+
+define <2 x i1> @test_unsigned_v2f64_v2i1(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: fmov d2, #1.00000000
+; CHECK-NEXT: mov d3, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmaxnm d1, d3, d1
+; CHECK-NEXT: fminnm d0, d0, d2
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i1> @llvm.fptoui.sat.v2f64.v2i1(<2 x double> %f)
+ ret <2 x i1> %x
+}
+
+define <2 x i8> @test_unsigned_v2f64_v2i8(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #246290604621824
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16495, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i8> @llvm.fptoui.sat.v2f64.v2i8(<2 x double> %f)
+ ret <2 x i8> %x
+}
+
+define <2 x i13> @test_unsigned_v2f64_v2i13(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #280375465082880
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16575, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i13> @llvm.fptoui.sat.v2f64.v2i13(<2 x double> %f)
+ ret <2 x i13> %x
+}
+
+define <2 x i16> @test_unsigned_v2f64_v2i16(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281337537757184
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16623, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i16> @llvm.fptoui.sat.v2f64.v2i16(<2 x double> %f)
+ ret <2 x i16> %x
+}
+
+define <2 x i19> @test_unsigned_v2f64_v2i19(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281457796841472
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16671, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i19> @llvm.fptoui.sat.v2f64.v2i19(<2 x double> %f)
+ ret <2 x i19> %x
+}
+
+define <2 x i32> @test_unsigned_v2f64_v2i32_duplicate(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #281474974613504
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #16879, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu w8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, d1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <2 x i32> @llvm.fptoui.sat.v2f64.v2i32(<2 x double> %f)
+ ret <2 x i32> %x
+}
+
+define <2 x i50> @test_unsigned_v2f64_v2i50(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov x8, #-8
+; CHECK-NEXT: fmov d1, xzr
+; CHECK-NEXT: movk x8, #17167, lsl #48
+; CHECK-NEXT: mov d2, v0.d[1]
+; CHECK-NEXT: fmaxnm d0, d0, d1
+; CHECK-NEXT: fmov d3, x8
+; CHECK-NEXT: fmaxnm d1, d2, d1
+; CHECK-NEXT: fminnm d0, d0, d3
+; CHECK-NEXT: fcvtzu x8, d0
+; CHECK-NEXT: fminnm d1, d1, d3
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: fcvtzu x8, d1
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: ret
+ %x = call <2 x i50> @llvm.fptoui.sat.v2f64.v2i50(<2 x double> %f)
+ ret <2 x i50> %x
+}
+
+define <2 x i64> @test_unsigned_v2f64_v2i64(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: mov d1, v0.d[1]
+; CHECK-NEXT: mov x8, #4895412794951729151
+; CHECK-NEXT: fmov d2, x8
+; CHECK-NEXT: fcvtzu x8, d1
+; CHECK-NEXT: fcmp d1, #0.0
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp d1, d2
+; CHECK-NEXT: fcvtzu x9, d0
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp d0, #0.0
+; CHECK-NEXT: csel x9, xzr, x9, lt
+; CHECK-NEXT: fcmp d0, d2
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: ret
+ %x = call <2 x i64> @llvm.fptoui.sat.v2f64.v2i64(<2 x double> %f)
+ ret <2 x i64> %x
+}
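Note that the v2i64 case above switches from the `fmaxnm`/`fminnm` clamp to an `fcmp`/`csel`/`csinv` expansion. One plausible reading: UINT64_MAX is not exactly representable as a double, so clamping in the floating-point domain cannot be exact, and the code instead compares against the largest double strictly below 2**64 and patches out-of-range lanes in the integer domain. A Python sketch checking the immediate:

```python
import struct

# "mov x8, #4895412794951729151" is the bit pattern of the largest double
# strictly below 2**64 (the value just under the unsigned 64-bit overflow).
bits = 4895412794951729151
bound = struct.unpack('<d', bits.to_bytes(8, 'little'))[0]
assert bits == 0x43EFFFFFFFFFFFFF
assert bound == 2.0**64 - 2.0**11  # the double immediately below 2**64
print(bound)
```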
+
+define <2 x i100> @test_unsigned_v2f64_v2i100(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #64 // =64
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: stp x30, x21, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 64
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: .cfi_offset b8, -40
+; CHECK-NEXT: .cfi_offset b9, -48
+; CHECK-NEXT: mov d8, v0.d[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: mov x8, #5057542381537067007
+; CHECK-NEXT: fcmp d8, #0.0
+; CHECK-NEXT: fmov d9, x8
+; CHECK-NEXT: mov x21, #68719476735
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: csel x10, xzr, x1, lt
+; CHECK-NEXT: fcmp d8, d9
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csel x19, x21, x10, gt
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x20
+; CHECK-NEXT: mov x3, x19
+; CHECK-NEXT: ldp x20, x19, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp d0, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp d0, d9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: csel x1, x21, x9, gt
+; CHECK-NEXT: ldp x30, x21, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #64 // =64
+; CHECK-NEXT: ret
+ %x = call <2 x i100> @llvm.fptoui.sat.v2f64.v2i100(<2 x double> %f)
+ ret <2 x i100> %x
+}
+
+define <2 x i128> @test_unsigned_v2f64_v2i128(<2 x double> %f) {
+; CHECK-LABEL: test_unsigned_v2f64_v2i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #64 // =64
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #32] // 8-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 64
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w30, -32
+; CHECK-NEXT: .cfi_offset b8, -40
+; CHECK-NEXT: .cfi_offset b9, -48
+; CHECK-NEXT: mov d8, v0.d[1]
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: mov x8, #5183643171103440895
+; CHECK-NEXT: fcmp d8, #0.0
+; CHECK-NEXT: fmov d9, x8
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp d8, d9
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: csinv x19, x10, xzr, le
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: bl __fixunsdfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: ldp x20, x19, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp d0, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp d0, d9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: ldr x30, [sp, #32] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #64 // =64
+; CHECK-NEXT: ret
+ %x = call <2 x i128> @llvm.fptoui.sat.v2f64.v2i128(<2 x double> %f)
+ ret <2 x i128> %x
+}
+
+;
+; 4-Vector half to unsigned integer -- result size variation
+;
+
+declare <4 x i1> @llvm.fptoui.sat.v4f16.v4i1 (<4 x half>)
+declare <4 x i8> @llvm.fptoui.sat.v4f16.v4i8 (<4 x half>)
+declare <4 x i13> @llvm.fptoui.sat.v4f16.v4i13 (<4 x half>)
+declare <4 x i16> @llvm.fptoui.sat.v4f16.v4i16 (<4 x half>)
+declare <4 x i19> @llvm.fptoui.sat.v4f16.v4i19 (<4 x half>)
+declare <4 x i50> @llvm.fptoui.sat.v4f16.v4i50 (<4 x half>)
+declare <4 x i64> @llvm.fptoui.sat.v4f16.v4i64 (<4 x half>)
+declare <4 x i100> @llvm.fptoui.sat.v4f16.v4i100(<4 x half>)
+declare <4 x i128> @llvm.fptoui.sat.v4f16.v4i128(<4 x half>)
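The semantics these tests exercise, per the commit description, are: truncate toward zero, saturate to the target type's range, and map NaN to 0. A minimal lane-wise Python model of the unsigned variant (a sketch of the documented behavior, not the LLVM implementation):

```python
import math

def fptoui_sat(x: float, bits: int) -> int:
    """Model of llvm.fptoui.sat.iN: truncate toward zero,
    saturate to [0, 2**bits - 1], and map NaN to 0."""
    if math.isnan(x):
        return 0
    hi = (1 << bits) - 1
    if math.isinf(x):
        return hi if x > 0 else 0
    # math.trunc on a finite float is exact (Python ints are unbounded).
    return min(max(math.trunc(x), 0), hi)

# Examples for the i13 variant tested above:
print(fptoui_sat(8200.5, 13))        # 8191 (saturated to 2**13 - 1)
print(fptoui_sat(-3.0, 13))          # 0
print(fptoui_sat(float('nan'), 13))  # 0
```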
+
+define <4 x i1> @test_unsigned_v4f16_v4i1(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i1:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmaxnm s1, s1, s2
+; CHECK-NEXT: fmaxnm s3, s3, s2
+; CHECK-NEXT: fmaxnm s4, s4, s2
+; CHECK-NEXT: fmaxnm s0, s0, s2
+; CHECK-NEXT: fmov s2, #1.00000000
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fminnm s1, s3, s2
+; CHECK-NEXT: fminnm s3, s4, s2
+; CHECK-NEXT: fminnm s2, s0, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.h[1], w8
+; CHECK-NEXT: fcvtzu w8, s3
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i1> @llvm.fptoui.sat.v4f16.v4i1(<4 x half> %f)
+ ret <4 x i1> %x
+}
+
+define <4 x i8> @test_unsigned_v4f16_v4i8(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i8:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: mov w8, #1132396544
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmaxnm s1, s1, s2
+; CHECK-NEXT: fmaxnm s3, s3, s2
+; CHECK-NEXT: fmaxnm s4, s4, s2
+; CHECK-NEXT: fmaxnm s0, s0, s2
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fminnm s1, s3, s2
+; CHECK-NEXT: fminnm s3, s4, s2
+; CHECK-NEXT: fminnm s2, s0, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.h[1], w8
+; CHECK-NEXT: fcvtzu w8, s3
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i8> @llvm.fptoui.sat.v4f16.v4i8(<4 x half> %f)
+ ret <4 x i8> %x
+}
+
+define <4 x i13> @test_unsigned_v4f16_v4i13(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i13:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w8, #63488
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: movk w8, #17919, lsl #16
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmaxnm s1, s1, s2
+; CHECK-NEXT: fmaxnm s3, s3, s2
+; CHECK-NEXT: fmaxnm s4, s4, s2
+; CHECK-NEXT: fmaxnm s0, s0, s2
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fminnm s1, s3, s2
+; CHECK-NEXT: fminnm s3, s4, s2
+; CHECK-NEXT: fminnm s2, s0, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.h[1], w8
+; CHECK-NEXT: fcvtzu w8, s3
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i13> @llvm.fptoui.sat.v4f16.v4i13(<4 x half> %f)
+ ret <4 x i13> %x
+}
+
+define <4 x i16> @test_unsigned_v4f16_v4i16(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i16:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w8, #65280
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: movk w8, #18303, lsl #16
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmaxnm s1, s1, s2
+; CHECK-NEXT: fmaxnm s3, s3, s2
+; CHECK-NEXT: fmaxnm s4, s4, s2
+; CHECK-NEXT: fmaxnm s0, s0, s2
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fminnm s1, s3, s2
+; CHECK-NEXT: fminnm s3, s4, s2
+; CHECK-NEXT: fminnm s2, s0, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.h[1], w8
+; CHECK-NEXT: fcvtzu w8, s3
+; CHECK-NEXT: mov v0.h[2], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: mov v0.h[3], w8
+; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT: ret
+ %x = call <4 x i16> @llvm.fptoui.sat.v4f16.v4i16(<4 x half> %f)
+ ret <4 x i16> %x
+}
+
+define <4 x i19> @test_unsigned_v4f16_v4i19(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i19:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w8, #65504
+; CHECK-NEXT: mov h3, v0.h[1]
+; CHECK-NEXT: mov h4, v0.h[2]
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fmov s2, wzr
+; CHECK-NEXT: movk w8, #18687, lsl #16
+; CHECK-NEXT: fcvt s3, h3
+; CHECK-NEXT: fcvt s4, h4
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: fmaxnm s1, s1, s2
+; CHECK-NEXT: fmaxnm s3, s3, s2
+; CHECK-NEXT: fmaxnm s4, s4, s2
+; CHECK-NEXT: fmaxnm s0, s0, s2
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: fminnm s1, s1, s2
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fminnm s1, s3, s2
+; CHECK-NEXT: fminnm s3, s4, s2
+; CHECK-NEXT: fminnm s2, s0, s2
+; CHECK-NEXT: fmov s0, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s3
+; CHECK-NEXT: mov v0.s[2], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i19> @llvm.fptoui.sat.v4f16.v4i19(<4 x half> %f)
+ ret <4 x i19> %x
+}
+
+define <4 x i32> @test_unsigned_v4f16_v4i32_duplicate(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i32_duplicate:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1333788671
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu w8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fcvtzu w9, s2
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: fmov s0, w9
+; CHECK-NEXT: fcvtzu w9, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel w9, wzr, w9, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov v0.s[1], w8
+; CHECK-NEXT: fcvtzu w8, s2
+; CHECK-NEXT: csinv w9, w9, wzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: csel w8, wzr, w8, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov v0.s[2], w9
+; CHECK-NEXT: csinv w8, w8, wzr, le
+; CHECK-NEXT: mov v0.s[3], w8
+; CHECK-NEXT: ret
+ %x = call <4 x i32> @llvm.fptoui.sat.v4f16.v4i32(<4 x half> %f)
+ ret <4 x i32> %x
+}
+
+define <4 x i50> @test_unsigned_v4f16_v4i50(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i50:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: fcvt s1, h0
+; CHECK-NEXT: mov w8, #1484783615
+; CHECK-NEXT: fcvtzu x10, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fmov s2, w8
+; CHECK-NEXT: csel x8, xzr, x10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov x9, #1125899906842623
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fcvtzu x10, s1
+; CHECK-NEXT: csel x0, x9, x8, gt
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: csel x8, xzr, x10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcvtzu x10, s1
+; CHECK-NEXT: csel x1, x9, x8, gt
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s0, h0
+; CHECK-NEXT: csel x8, xzr, x10, lt
+; CHECK-NEXT: fcmp s1, s2
+; CHECK-NEXT: fcvtzu x11, s0
+; CHECK-NEXT: csel x2, x9, x8, gt
+; CHECK-NEXT: fcmp s0, #0.0
+; CHECK-NEXT: csel x8, xzr, x11, lt
+; CHECK-NEXT: fcmp s0, s2
+; CHECK-NEXT: csel x3, x9, x8, gt
+; CHECK-NEXT: ret
+ %x = call <4 x i50> @llvm.fptoui.sat.v4f16.v4i50(<4 x half> %f)
+ ret <4 x i50> %x
+}
+
+define <4 x i64> @test_unsigned_v4f16_v4i64(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i64:
+; CHECK: // %bb.0:
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: mov w8, #1602224127
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: fmov s3, w8
+; CHECK-NEXT: fcvtzu x8, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s2, h0
+; CHECK-NEXT: csel x8, xzr, x8, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: mov h1, v0.h[3]
+; CHECK-NEXT: fcvtzu x9, s2
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: fcmp s2, #0.0
+; CHECK-NEXT: fcvt s1, h1
+; CHECK-NEXT: csel x9, xzr, x9, lt
+; CHECK-NEXT: fcmp s2, s3
+; CHECK-NEXT: mov h0, v0.h[2]
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: fcvtzu x10, s1
+; CHECK-NEXT: fcmp s1, #0.0
+; CHECK-NEXT: fcvt s4, h0
+; CHECK-NEXT: csel x10, xzr, x10, lt
+; CHECK-NEXT: fcmp s1, s3
+; CHECK-NEXT: fmov d0, x9
+; CHECK-NEXT: fcvtzu x9, s4
+; CHECK-NEXT: csinv x10, x10, xzr, le
+; CHECK-NEXT: fcmp s4, #0.0
+; CHECK-NEXT: csel x9, xzr, x9, lt
+; CHECK-NEXT: fcmp s4, s3
+; CHECK-NEXT: csinv x9, x9, xzr, le
+; CHECK-NEXT: fmov d1, x9
+; CHECK-NEXT: mov v0.d[1], x8
+; CHECK-NEXT: mov v1.d[1], x10
+; CHECK-NEXT: ret
+ %x = call <4 x i64> @llvm.fptoui.sat.v4f16.v4i64(<4 x half> %f)
+ ret <4 x i64> %x
+}
+
+define <4 x i100> @test_unsigned_v4f16_v4i100(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i100:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #96 // =96
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: stp x30, x25, [sp, #32] // 16-byte Folded Spill
+; CHECK-NEXT: stp x24, x23, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 96
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w23, -40
+; CHECK-NEXT: .cfi_offset w24, -48
+; CHECK-NEXT: .cfi_offset w25, -56
+; CHECK-NEXT: .cfi_offset w30, -64
+; CHECK-NEXT: .cfi_offset b8, -72
+; CHECK-NEXT: .cfi_offset b9, -80
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[2]
+; CHECK-NEXT: fcvt s8, h1
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w8, #1904214015
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov h0, v0.h[1]
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: csel x10, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov x25, #68719476735
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x19, x25, x10, gt
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x21, x25, x9, gt
+; CHECK-NEXT: csinv x22, x8, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csel x23, x25, x9, gt
+; CHECK-NEXT: csinv x24, x8, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: csel x1, x25, x9, gt
+; CHECK-NEXT: mov x2, x22
+; CHECK-NEXT: mov x3, x21
+; CHECK-NEXT: mov x4, x20
+; CHECK-NEXT: mov x5, x19
+; CHECK-NEXT: mov x6, x24
+; CHECK-NEXT: mov x7, x23
+; CHECK-NEXT: ldp x20, x19, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x24, x23, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x30, x25, [sp, #32] // 16-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #96 // =96
+; CHECK-NEXT: ret
+ %x = call <4 x i100> @llvm.fptoui.sat.v4f16.v4i100(<4 x half> %f)
+ ret <4 x i100> %x
+}
+
+define <4 x i128> @test_unsigned_v4f16_v4i128(<4 x half> %f) {
+; CHECK-LABEL: test_unsigned_v4f16_v4i128:
+; CHECK: // %bb.0:
+; CHECK-NEXT: sub sp, sp, #96 // =96
+; CHECK-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NEXT: str x30, [sp, #32] // 8-byte Folded Spill
+; CHECK-NEXT: stp x24, x23, [sp, #48] // 16-byte Folded Spill
+; CHECK-NEXT: stp x22, x21, [sp, #64] // 16-byte Folded Spill
+; CHECK-NEXT: stp x20, x19, [sp, #80] // 16-byte Folded Spill
+; CHECK-NEXT: .cfi_def_cfa_offset 96
+; CHECK-NEXT: .cfi_offset w19, -8
+; CHECK-NEXT: .cfi_offset w20, -16
+; CHECK-NEXT: .cfi_offset w21, -24
+; CHECK-NEXT: .cfi_offset w22, -32
+; CHECK-NEXT: .cfi_offset w23, -40
+; CHECK-NEXT: .cfi_offset w24, -48
+; CHECK-NEXT: .cfi_offset w30, -64
+; CHECK-NEXT: .cfi_offset b8, -72
+; CHECK-NEXT: .cfi_offset b9, -80
+; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
+; CHECK-NEXT: mov h1, v0.h[1]
+; CHECK-NEXT: fcvt s8, h1
+; CHECK-NEXT: str q0, [sp] // 16-byte Folded Spill
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: mov w8, #2139095039
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: fmov s9, w8
+; CHECK-NEXT: mov h0, v0.h[2]
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: csel x10, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csinv x19, x10, xzr, le
+; CHECK-NEXT: csinv x20, x9, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: mov h0, v0.h[3]
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csinv x21, x9, xzr, le
+; CHECK-NEXT: csinv x22, x8, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: ldr q0, [sp] // 16-byte Folded Reload
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x1, lt
+; CHECK-NEXT: csel x9, xzr, x0, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: fcvt s8, h0
+; CHECK-NEXT: mov v0.16b, v8.16b
+; CHECK-NEXT: csinv x23, x9, xzr, le
+; CHECK-NEXT: csinv x24, x8, xzr, le
+; CHECK-NEXT: bl __fixunssfti
+; CHECK-NEXT: fcmp s8, #0.0
+; CHECK-NEXT: csel x8, xzr, x0, lt
+; CHECK-NEXT: csel x9, xzr, x1, lt
+; CHECK-NEXT: fcmp s8, s9
+; CHECK-NEXT: csinv x8, x8, xzr, le
+; CHECK-NEXT: mov x2, x19
+; CHECK-NEXT: mov x3, x20
+; CHECK-NEXT: mov x4, x21
+; CHECK-NEXT: mov x5, x22
+; CHECK-NEXT: mov x6, x23
+; CHECK-NEXT: mov x7, x24
+; CHECK-NEXT: ldp x20, x19, [sp, #80] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x22, x21, [sp, #64] // 16-byte Folded Reload
+; CHECK-NEXT: ldp x24, x23, [sp, #48] // 16-byte Folded Reload
+; CHECK-NEXT: ldr x30, [sp, #32] // 8-byte Folded Reload
+; CHECK-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NEXT: csinv x1, x9, xzr, le
+; CHECK-NEXT: fmov d0, x8
+; CHECK-NEXT: mov v0.d[1], x1
+; CHECK-NEXT: fmov x0, d0
+; CHECK-NEXT: add sp, sp, #96 // =96
+; CHECK-NEXT: ret
+ %x = call <4 x i128> @llvm.fptoui.sat.v4f16.v4i128(<4 x half> %f)
+ ret <4 x i128> %x
+}
+
diff --git a/llvm/test/CodeGen/ARM/fptosi-sat-scalar.ll b/llvm/test/CodeGen/ARM/fptosi-sat-scalar.ll
new file mode 100644
index 000000000000..6a0e38f744d0
--- /dev/null
+++ b/llvm/test/CodeGen/ARM/fptosi-sat-scalar.ll
@@ -0,0 +1,2812 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=arm-eabi -float-abi=soft %s -o - | FileCheck %s --check-prefixes=SOFT
+; RUN: llc -mtriple=arm-eabi -mattr=+vfp2 %s -o - | FileCheck %s --check-prefixes=VFP2
+
+;
+; 32-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f32 (float)
+declare i8 @llvm.fptosi.sat.i8.f32 (float)
+declare i13 @llvm.fptosi.sat.i13.f32 (float)
+declare i16 @llvm.fptosi.sat.i16.f32 (float)
+declare i19 @llvm.fptosi.sat.i19.f32 (float)
+declare i32 @llvm.fptosi.sat.i32.f32 (float)
+declare i50 @llvm.fptosi.sat.i50.f32 (float)
+declare i64 @llvm.fptosi.sat.i64.f32 (float)
+declare i100 @llvm.fptosi.sat.i100.f32(float)
+declare i128 @llvm.fptosi.sat.i128.f32(float)
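The signed tests that follow use the same scheme with a two-sided range. A lane-wise Python model of the signed variant (again a sketch of the behavior described in the commit message, not the LLVM implementation); note the i1 case saturates to [-1, 0], which matches the `float -1` bound in the VFP2 constant pool below:

```python
import math

def fptosi_sat(x: float, bits: int) -> int:
    """Model of llvm.fptosi.sat.iN: truncate toward zero, saturate
    to [-2**(bits-1), 2**(bits-1) - 1], and map NaN to 0."""
    if math.isnan(x):
        return 0
    lo = -(1 << (bits - 1))
    hi = (1 << (bits - 1)) - 1
    if math.isinf(x):
        return hi if x > 0 else lo
    return min(max(math.trunc(x), lo), hi)

print(fptosi_sat(-5.0, 1))          # -1 (signed i1 range is [-1, 0])
print(fptosi_sat(200.0, 8))         # 127
print(fptosi_sat(-200.0, 8))        # -128
print(fptosi_sat(float('nan'), 8))  # 0
```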
+
+define i1 @test_signed_i1_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i1_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov r1, #0
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r1, #1065353216
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: orr r1, r1, #-2147483648
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: cmp r6, #0
+; SOFT-NEXT: mvneq r7, #0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r0, r7
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i1_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI0_0
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, #0
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: mvnlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI0_0:
+; VFP2-NEXT: .long 0xbf800000 @ float -1
+ %x = call i1 @llvm.fptosi.sat.i1.f32(float %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i8_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov r1, #16646144
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #1107296256
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-1023410176
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: cmp r6, #0
+; SOFT-NEXT: mvneq r7, #127
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r7, #127
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r0, r7
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i8_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI1_0
+; VFP2-NEXT: vldr s6, .LCPI1_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: mvnlt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI1_0:
+; VFP2-NEXT: .long 0xc3000000 @ float -128
+; VFP2-NEXT: .LCPI1_1:
+; VFP2-NEXT: .long 0x42fe0000 @ float 127
+ %x = call i8 @llvm.fptosi.sat.i8.f32(float %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i13_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #92274688
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #-1073741824
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI2_0
+; SOFT-NEXT: ldr r1, .LCPI2_1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #3840
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI2_0:
+; SOFT-NEXT: .long 4294963200 @ 0xfffff000
+; SOFT-NEXT: .LCPI2_1:
+; SOFT-NEXT: .long 1166012416 @ 0x457ff000
+;
+; VFP2-LABEL: test_signed_i13_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI2_0
+; VFP2-NEXT: vldr s6, .LCPI2_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: ldr r0, .LCPI2_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r1, s4
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: orr r0, r0, #3840
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI2_0:
+; VFP2-NEXT: .long 0xc5800000 @ float -4096
+; VFP2-NEXT: .LCPI2_1:
+; VFP2-NEXT: .long 0x457ff000 @ float 4095
+; VFP2-NEXT: .LCPI2_2:
+; VFP2-NEXT: .long 4294963200 @ 0xfffff000
+ %x = call i13 @llvm.fptosi.sat.i13.f32(float %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i16_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #-956301312
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI3_0
+; SOFT-NEXT: ldr r1, .LCPI3_1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #32512
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI3_0:
+; SOFT-NEXT: .long 4294934528 @ 0xffff8000
+; SOFT-NEXT: .LCPI3_1:
+; SOFT-NEXT: .long 1191181824 @ 0x46fffe00
+;
+; VFP2-LABEL: test_signed_i16_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI3_0
+; VFP2-NEXT: vldr s6, .LCPI3_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: ldr r0, .LCPI3_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r1, s4
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: orr r0, r0, #32512
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI3_0:
+; VFP2-NEXT: .long 0xc7000000 @ float -32768
+; VFP2-NEXT: .LCPI3_1:
+; VFP2-NEXT: .long 0x46fffe00 @ float 32767
+; VFP2-NEXT: .LCPI3_2:
+; VFP2-NEXT: .long 4294934528 @ 0xffff8000
+ %x = call i16 @llvm.fptosi.sat.i16.f32(float %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i19_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #142606336
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #-1073741824
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, #66846720
+; SOFT-NEXT: orr r0, r0, #-67108864
+; SOFT-NEXT: ldr r1, .LCPI4_0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: ldr r1, .LCPI4_1
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI4_0:
+; SOFT-NEXT: .long 1216348096 @ 0x487fffc0
+; SOFT-NEXT: .LCPI4_1:
+; SOFT-NEXT: .long 262143 @ 0x3ffff
+;
+; VFP2-LABEL: test_signed_i19_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s6, .LCPI4_2
+; VFP2-NEXT: vldr s2, .LCPI4_0
+; VFP2-NEXT: mov r0, #66846720
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: orr r0, r0, #-67108864
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: ldr r1, .LCPI4_1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmov r2, s4
+; VFP2-NEXT: movge r0, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI4_0:
+; VFP2-NEXT: .long 0x487fffc0 @ float 262143
+; VFP2-NEXT: .LCPI4_1:
+; VFP2-NEXT: .long 262143 @ 0x3ffff
+; VFP2-NEXT: .LCPI4_2:
+; VFP2-NEXT: .long 0xc8800000 @ float -262144
+ %x = call i19 @llvm.fptosi.sat.i19.f32(float %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i32_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mvn r1, #-1325400064
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-822083584
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: cmp r6, #0
+; SOFT-NEXT: moveq r7, #-2147483648
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r7, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r0, r7
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i32_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI5_0
+; VFP2-NEXT: vldr s6, .LCPI5_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: movlt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: mvngt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI5_0:
+; VFP2-NEXT: .long 0xcf000000 @ float -2.14748365E+9
+; VFP2-NEXT: .LCPI5_1:
+; VFP2-NEXT: .long 0x4effffff @ float 2.14748352E+9
+ %x = call i32 @llvm.fptosi.sat.i32.f32(float %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i50_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mvn r1, #-1476395008
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-671088640
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2lz
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: moveq r5, r7
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-671088640
+; SOFT-NEXT: movne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r1, #16646144
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #-16777216
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: moveq r6, r1
+; SOFT-NEXT: mvn r1, #-1476395008
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: ldr r1, .LCPI6_0
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI6_0:
+; SOFT-NEXT: .long 131071 @ 0x1ffff
+;
+; VFP2-LABEL: test_signed_i50_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: .vsave {d8, d9}
+; VFP2-NEXT: vpush {d8, d9}
+; VFP2-NEXT: vldr s16, .LCPI6_0
+; VFP2-NEXT: vmov s18, r0
+; VFP2-NEXT: bl __aeabi_f2lz
+; VFP2-NEXT: vcmp.f32 s18, s16
+; VFP2-NEXT: mov r2, #16646144
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: orr r2, r2, #-16777216
+; VFP2-NEXT: vldr s0, .LCPI6_1
+; VFP2-NEXT: ldr r3, .LCPI6_2
+; VFP2-NEXT: vcmp.f32 s18, s0
+; VFP2-NEXT: movlt r1, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s18
+; VFP2-NEXT: movgt r1, r3
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s16
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s0
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s18
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vpop {d8, d9}
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI6_0:
+; VFP2-NEXT: .long 0xd8000000 @ float -5.62949953E+14
+; VFP2-NEXT: .LCPI6_1:
+; VFP2-NEXT: .long 0x57ffffff @ float 5.6294992E+14
+; VFP2-NEXT: .LCPI6_2:
+; VFP2-NEXT: .long 131071 @ 0x1ffff
+ %x = call i50 @llvm.fptosi.sat.i50.f32(float %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i64_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mvn r1, #-1593835520
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-553648128
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2lz
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: moveq r5, r7
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1593835520
+; SOFT-NEXT: movne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-553648128
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: moveq r6, #-2147483648
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i64_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __aeabi_f2lz
+; VFP2-NEXT: vldr s0, .LCPI7_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI7_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI7_0:
+; VFP2-NEXT: .long 0xdf000000 @ float -9.22337203E+18
+; VFP2-NEXT: .LCPI7_1:
+; VFP2-NEXT: .long 0x5effffff @ float 9.22337149E+18
+ %x = call i64 @llvm.fptosi.sat.i64.f32(float %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i100_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r9, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __fixsfti
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: moveq r10, r5
+; SOFT-NEXT: cmp r9, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r7, r2
+; SOFT-NEXT: mov r8, r3
+; SOFT-NEXT: mvnne r10, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: movne r10, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvneq r8, #7
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r8, #7
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r10
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: mov r2, r7
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i100_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __fixsfti
+; VFP2-NEXT: vldr s0, .LCPI8_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI8_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: mvnlt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: movgt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI8_0:
+; VFP2-NEXT: .long 0xf1000000 @ float -6.338253E+29
+; VFP2-NEXT: .LCPI8_1:
+; VFP2-NEXT: .long 0x70ffffff @ float 6.33825262E+29
+ %x = call i100 @llvm.fptosi.sat.i100.f32(float %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f32(float %f) nounwind {
+; SOFT-LABEL: test_signed_i128_f32:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r9, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __fixsfti
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: moveq r10, r5
+; SOFT-NEXT: cmp r9, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r7, r2
+; SOFT-NEXT: mov r8, r3
+; SOFT-NEXT: mvnne r10, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: movne r10, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: moveq r8, #-2147483648
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r8, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r10
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: mov r2, r7
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r10, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i128_f32:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __fixsfti
+; VFP2-NEXT: vldr s0, .LCPI9_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI9_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI9_0:
+; VFP2-NEXT: .long 0xff000000 @ float -1.70141183E+38
+; VFP2-NEXT: .LCPI9_1:
+; VFP2-NEXT: .long 0x7effffff @ float 1.70141173E+38
+ %x = call i128 @llvm.fptosi.sat.i128.f32(float %f)
+ ret i128 %x
+}
+
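(Aside, not part of the patch: the expected scalar behavior these tests verify, per the commit message, is truncation toward zero, clamping on out-of-range inputs, and 0 for NaN. A reference sketch with a hypothetical `fptosi_sat` helper:)

```python
import math

def fptosi_sat(f, bits):
    # Reference semantics for llvm.fptosi.sat.iN: NaN maps to 0,
    # otherwise truncate toward zero and clamp to the signed N-bit range.
    if math.isnan(f):
        return 0
    lo = -(1 << (bits - 1))
    hi = (1 << (bits - 1)) - 1
    return max(lo, min(hi, math.trunc(f)))

print(fptosi_sat(float("nan"), 16))  # 0
print(fptosi_sat(1e9, 16))           # 32767 (saturated high)
print(fptosi_sat(-1e9, 19))          # -262144 (saturated low)
print(fptosi_sat(123.9, 32))         # 123 (in range, truncated)
```

The `vcmp.f32 s0, s0` / `movvs r0, #0` pairs in the VFP2 output implement the NaN case: comparing a value with itself sets the V (unordered) flag exactly when it is NaN.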
+;
+; 64-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f64 (double)
+declare i8 @llvm.fptosi.sat.i8.f64 (double)
+declare i13 @llvm.fptosi.sat.i13.f64 (double)
+declare i16 @llvm.fptosi.sat.i16.f64 (double)
+declare i19 @llvm.fptosi.sat.i19.f64 (double)
+declare i32 @llvm.fptosi.sat.i32.f64 (double)
+declare i50 @llvm.fptosi.sat.i50.f64 (double)
+declare i64 @llvm.fptosi.sat.i64.f64 (double)
+declare i100 @llvm.fptosi.sat.i100.f64(double)
+declare i128 @llvm.fptosi.sat.i128.f64(double)
+
+define i1 @test_signed_i1_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i1_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov r3, #267386880
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: orr r3, r3, #-1342177280
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: cmp r6, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r3, #0
+; SOFT-NEXT: mvneq r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r0, r7
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i1_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vldr d2, .LCPI10_0
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: vcmp.f64 d0, #0
+; VFP2-NEXT: vmov r0, s2
+; VFP2-NEXT: mvnlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: movgt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI10_0:
+; VFP2-NEXT: .long 0 @ double -1
+; VFP2-NEXT: .long 3220176896
+ %x = call i1 @llvm.fptosi.sat.i1.f64(double %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i8_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: ldr r3, .LCPI11_0
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #6291456
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mvneq r6, #127
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: movne r6, #127
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI11_0:
+; SOFT-NEXT: .long 1080016896 @ 0x405fc000
+;
+; VFP2-LABEL: test_signed_i8_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vldr d2, .LCPI11_0
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vldr d3, .LCPI11_1
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: vcmp.f64 d0, d3
+; VFP2-NEXT: vmov r0, s2
+; VFP2-NEXT: mvnlt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: movgt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI11_0:
+; VFP2-NEXT: .long 0 @ double -128
+; VFP2-NEXT: .long 3227516928
+; VFP2-NEXT: .LCPI11_1:
+; VFP2-NEXT: .long 0 @ double 127
+; VFP2-NEXT: .long 1080016896
+ %x = call i8 @llvm.fptosi.sat.i8.f64(double %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i13_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: ldr r3, .LCPI12_0
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #11534336
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI12_1
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, #255
+; SOFT-NEXT: orr r0, r0, #3840
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: movne r6, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI12_0:
+; SOFT-NEXT: .long 1085275648 @ 0x40affe00
+; SOFT-NEXT: .LCPI12_1:
+; SOFT-NEXT: .long 4294963200 @ 0xfffff000
+;
+; VFP2-LABEL: test_signed_i13_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vldr d2, .LCPI12_0
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vldr d3, .LCPI12_1
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: ldr r0, .LCPI12_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: vcmp.f64 d0, d3
+; VFP2-NEXT: vmov r1, s2
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: orr r0, r0, #3840
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI12_0:
+; VFP2-NEXT: .long 0 @ double -4096
+; VFP2-NEXT: .long 3232759808
+; VFP2-NEXT: .LCPI12_1:
+; VFP2-NEXT: .long 0 @ double 4095
+; VFP2-NEXT: .long 1085275648
+; VFP2-NEXT: .LCPI12_2:
+; VFP2-NEXT: .long 4294963200 @ 0xfffff000
+ %x = call i13 @llvm.fptosi.sat.i13.f64(double %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i16_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: ldr r3, .LCPI13_0
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #14680064
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI13_1
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, #255
+; SOFT-NEXT: orr r0, r0, #32512
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: movne r6, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI13_0:
+; SOFT-NEXT: .long 1088421824 @ 0x40dfffc0
+; SOFT-NEXT: .LCPI13_1:
+; SOFT-NEXT: .long 4294934528 @ 0xffff8000
+;
+; VFP2-LABEL: test_signed_i16_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vldr d2, .LCPI13_0
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vldr d3, .LCPI13_1
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: ldr r0, .LCPI13_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: vcmp.f64 d0, d3
+; VFP2-NEXT: vmov r1, s2
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: orr r0, r0, #32512
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI13_0:
+; VFP2-NEXT: .long 0 @ double -32768
+; VFP2-NEXT: .long 3235905536
+; VFP2-NEXT: .LCPI13_1:
+; VFP2-NEXT: .long 0 @ double 32767
+; VFP2-NEXT: .long 1088421824
+; VFP2-NEXT: .LCPI13_2:
+; VFP2-NEXT: .long 4294934528 @ 0xffff8000
+ %x = call i16 @llvm.fptosi.sat.i16.f64(double %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i19_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: ldr r3, .LCPI14_0
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #17825792
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, #66846720
+; SOFT-NEXT: orr r0, r0, #-67108864
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: ldr r0, .LCPI14_1
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: movne r6, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI14_0:
+; SOFT-NEXT: .long 1091567608 @ 0x410ffff8
+; SOFT-NEXT: .LCPI14_1:
+; SOFT-NEXT: .long 262143 @ 0x3ffff
+;
+; VFP2-LABEL: test_signed_i19_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vldr d3, .LCPI14_2
+; VFP2-NEXT: vldr d2, .LCPI14_0
+; VFP2-NEXT: mov r0, #66846720
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: orr r0, r0, #-67108864
+; VFP2-NEXT: ldr r1, .LCPI14_1
+; VFP2-NEXT: vcmp.f64 d0, d3
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vmov r2, s2
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: movge r0, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: movgt r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI14_0:
+; VFP2-NEXT: .long 0 @ double 262143
+; VFP2-NEXT: .long 1091567608
+; VFP2-NEXT: .LCPI14_2:
+; VFP2-NEXT: .long 0 @ double -262144
+; VFP2-NEXT: .long 3239051264
+; VFP2-NEXT: .LCPI14_1:
+; VFP2-NEXT: .long 262143 @ 0x3ffff
+ %x = call i19 @llvm.fptosi.sat.i19.f64(double %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i32_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov r2, #1069547520
+; SOFT-NEXT: ldr r3, .LCPI15_0
+; SOFT-NEXT: orr r2, r2, #-1073741824
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #31457280
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: moveq r6, #-2147483648
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI15_0:
+; SOFT-NEXT: .long 1105199103 @ 0x41dfffff
+;
+; VFP2-LABEL: test_signed_i32_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: vldr d2, .LCPI15_0
+; VFP2-NEXT: vmov d0, r0, r1
+; VFP2-NEXT: vldr d3, .LCPI15_1
+; VFP2-NEXT: vcmp.f64 d0, d2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f64 s2, d0
+; VFP2-NEXT: vcmp.f64 d0, d3
+; VFP2-NEXT: vmov r0, s2
+; VFP2-NEXT: movlt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d0, d0
+; VFP2-NEXT: mvngt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI15_0:
+; VFP2-NEXT: .long 0 @ double -2147483648
+; VFP2-NEXT: .long 3252682752
+; VFP2-NEXT: .LCPI15_1:
+; VFP2-NEXT: .long 4290772992 @ double 2147483647
+; VFP2-NEXT: .long 1105199103
+ %x = call i32 @llvm.fptosi.sat.i32.f64(double %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i50_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mvn r2, #15
+; SOFT-NEXT: mvn r3, #-1124073472
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r3, #-1023410176
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r9, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2lz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r9, #0
+; SOFT-NEXT: mov r7, r1
+; SOFT-NEXT: moveq r6, r9
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #15
+; SOFT-NEXT: mvn r3, #-1124073472
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r3, #-1023410176
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r1, #16646144
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #-16777216
+; SOFT-NEXT: ldr r0, .LCPI16_0
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: moveq r7, r1
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: movne r7, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r1, r7
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI16_0:
+; SOFT-NEXT: .long 131071 @ 0x1ffff
+;
+; VFP2-LABEL: test_signed_i50_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, r5, r11, lr}
+; VFP2-NEXT: push {r4, r5, r11, lr}
+; VFP2-NEXT: mov r4, r1
+; VFP2-NEXT: mov r5, r0
+; VFP2-NEXT: bl __aeabi_d2lz
+; VFP2-NEXT: vldr d0, .LCPI16_0
+; VFP2-NEXT: vmov d2, r5, r4
+; VFP2-NEXT: vldr d1, .LCPI16_1
+; VFP2-NEXT: mov r2, #16646144
+; VFP2-NEXT: vcmp.f64 d2, d0
+; VFP2-NEXT: orr r2, r2, #-16777216
+; VFP2-NEXT: ldr r3, .LCPI16_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d2, d1
+; VFP2-NEXT: movlt r1, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d2, d2
+; VFP2-NEXT: movgt r1, r3
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d2, d0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d2, d1
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d2, d2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r4, r5, r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI16_0:
+; VFP2-NEXT: .long 0 @ double -562949953421312
+; VFP2-NEXT: .long 3271557120
+; VFP2-NEXT: .LCPI16_1:
+; VFP2-NEXT: .long 4294967280 @ double 562949953421311
+; VFP2-NEXT: .long 1124073471
+; VFP2-NEXT: .LCPI16_2:
+; VFP2-NEXT: .long 131071 @ 0x1ffff
+ %x = call i50 @llvm.fptosi.sat.i50.f64(double %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i64_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: .pad #4
+; SOFT-NEXT: sub sp, sp, #4
+; SOFT-NEXT: ldr r8, .LCPI17_0
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r9, #65011712
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r9, r9, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r3, r9
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r11, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_d2lz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r11, #0
+; SOFT-NEXT: mov r7, r1
+; SOFT-NEXT: moveq r6, r11
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r8, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: mov r3, r9
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: moveq r7, #-2147483648
+; SOFT-NEXT: cmp r8, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r7, #-2147483648
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: mov r1, r7
+; SOFT-NEXT: add sp, sp, #4
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI17_0:
+; SOFT-NEXT: .long 1138753535 @ 0x43dfffff
+;
+; VFP2-LABEL: test_signed_i64_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, r5, r11, lr}
+; VFP2-NEXT: push {r4, r5, r11, lr}
+; VFP2-NEXT: mov r4, r1
+; VFP2-NEXT: mov r5, r0
+; VFP2-NEXT: bl __aeabi_d2lz
+; VFP2-NEXT: vldr d0, .LCPI17_0
+; VFP2-NEXT: vmov d1, r5, r4
+; VFP2-NEXT: vldr d2, .LCPI17_1
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: pop {r4, r5, r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI17_0:
+; VFP2-NEXT: .long 0 @ double -9.2233720368547758E+18
+; VFP2-NEXT: .long 3286237184
+; VFP2-NEXT: .LCPI17_1:
+; VFP2-NEXT: .long 4294967295 @ double 9.2233720368547748E+18
+; VFP2-NEXT: .long 1138753535
+ %x = call i64 @llvm.fptosi.sat.i64.f64(double %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i100_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: .pad #4
+; SOFT-NEXT: sub sp, sp, #4
+; SOFT-NEXT: ldr r3, .LCPI18_0
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #102760448
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r11, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __fixdfti
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r11, #0
+; SOFT-NEXT: mov r7, r1
+; SOFT-NEXT: mov r8, r2
+; SOFT-NEXT: mov r9, r3
+; SOFT-NEXT: moveq r6, r11
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: ldr r11, .LCPI18_0
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #102760448
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #102760448
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r8, r0
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r8, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #102760448
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mvneq r9, #7
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: movne r9, #7
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: movne r9, #0
+; SOFT-NEXT: mov r1, r7
+; SOFT-NEXT: mov r2, r8
+; SOFT-NEXT: mov r3, r9
+; SOFT-NEXT: add sp, sp, #4
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI18_0:
+; SOFT-NEXT: .long 1176502271 @ 0x461fffff
+;
+; VFP2-LABEL: test_signed_i100_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, r5, r11, lr}
+; VFP2-NEXT: push {r4, r5, r11, lr}
+; VFP2-NEXT: mov r4, r1
+; VFP2-NEXT: mov r5, r0
+; VFP2-NEXT: bl __fixdfti
+; VFP2-NEXT: vldr d0, .LCPI18_0
+; VFP2-NEXT: vmov d1, r5, r4
+; VFP2-NEXT: vldr d2, .LCPI18_1
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: mvnlt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: movgt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, r5, r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI18_0:
+; VFP2-NEXT: .long 0 @ double -6.338253001141147E+29
+; VFP2-NEXT: .long 3323985920
+; VFP2-NEXT: .LCPI18_1:
+; VFP2-NEXT: .long 4294967295 @ double 6.3382530011411463E+29
+; VFP2-NEXT: .long 1176502271
+ %x = call i100 @llvm.fptosi.sat.i100.f64(double %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f64(double %f) nounwind {
+; SOFT-LABEL: test_signed_i128_f64:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: .pad #4
+; SOFT-NEXT: sub sp, sp, #4
+; SOFT-NEXT: ldr r3, .LCPI19_0
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r4, r1
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #132120576
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: mov r11, r0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __fixdfti
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r11, #0
+; SOFT-NEXT: mov r7, r1
+; SOFT-NEXT: mov r8, r2
+; SOFT-NEXT: mov r9, r3
+; SOFT-NEXT: moveq r6, r11
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: ldr r11, .LCPI19_0
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #132120576
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #132120576
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: moveq r8, r0
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r8, #0
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvn r2, #0
+; SOFT-NEXT: mov r3, r11
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: bl __aeabi_dcmpgt
+; SOFT-NEXT: mov r3, #132120576
+; SOFT-NEXT: mov r10, r0
+; SOFT-NEXT: orr r3, r3, #-1073741824
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, #0
+; SOFT-NEXT: bl __aeabi_dcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: moveq r9, #-2147483648
+; SOFT-NEXT: cmp r10, #0
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mov r2, r5
+; SOFT-NEXT: mov r3, r4
+; SOFT-NEXT: mvnne r9, #-2147483648
+; SOFT-NEXT: bl __aeabi_dcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: movne r9, #0
+; SOFT-NEXT: mov r1, r7
+; SOFT-NEXT: mov r2, r8
+; SOFT-NEXT: mov r3, r9
+; SOFT-NEXT: add sp, sp, #4
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r10, r11, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI19_0:
+; SOFT-NEXT: .long 1205862399 @ 0x47dfffff
+;
+; VFP2-LABEL: test_signed_i128_f64:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, r5, r11, lr}
+; VFP2-NEXT: push {r4, r5, r11, lr}
+; VFP2-NEXT: mov r4, r1
+; VFP2-NEXT: mov r5, r0
+; VFP2-NEXT: bl __fixdfti
+; VFP2-NEXT: vldr d0, .LCPI19_0
+; VFP2-NEXT: vmov d1, r5, r4
+; VFP2-NEXT: vldr d2, .LCPI19_1
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d2
+; VFP2-NEXT: movlt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f64 d1, d1
+; VFP2-NEXT: mvngt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, r5, r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 3
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI19_0:
+; VFP2-NEXT: .long 0 @ double -1.7014118346046923E+38
+; VFP2-NEXT: .long 3353346048
+; VFP2-NEXT: .LCPI19_1:
+; VFP2-NEXT: .long 4294967295 @ double 1.7014118346046921E+38
+; VFP2-NEXT: .long 1205862399
+ %x = call i128 @llvm.fptosi.sat.i128.f64(double %f)
+ ret i128 %x
+}
+
+;
+; 16-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f16 (half)
+declare i8 @llvm.fptosi.sat.i8.f16 (half)
+declare i13 @llvm.fptosi.sat.i13.f16 (half)
+declare i16 @llvm.fptosi.sat.i16.f16 (half)
+declare i19 @llvm.fptosi.sat.i19.f16 (half)
+declare i32 @llvm.fptosi.sat.i32.f16 (half)
+declare i50 @llvm.fptosi.sat.i50.f16 (half)
+declare i64 @llvm.fptosi.sat.i64.f16 (half)
+declare i100 @llvm.fptosi.sat.i100.f16(half)
+declare i128 @llvm.fptosi.sat.i128.f16(half)
+
+define i1 @test_signed_i1_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i1_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #1065353216
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #0
+; SOFT-NEXT: mvneq r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i1_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI20_0
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, #0
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: mvnlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI20_0:
+; VFP2-NEXT: .long 0xbf800000 @ float -1
+ %x = call i1 @llvm.fptosi.sat.i1.f16(half %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i8_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-1023410176
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r1, #16646144
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: orr r1, r1, #1107296256
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvneq r6, #127
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r6, #127
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i8_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI21_0
+; VFP2-NEXT: vldr s6, .LCPI21_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: mvnlt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, #127
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI21_0:
+; VFP2-NEXT: .long 0xc3000000 @ float -128
+; VFP2-NEXT: .LCPI21_1:
+; VFP2-NEXT: .long 0x42fe0000 @ float 127
+ %x = call i8 @llvm.fptosi.sat.i8.f16(half %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i13_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #92274688
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #-1073741824
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI22_0
+; SOFT-NEXT: ldr r1, .LCPI22_1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #3840
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI22_0:
+; SOFT-NEXT: .long 4294963200 @ 0xfffff000
+; SOFT-NEXT: .LCPI22_1:
+; SOFT-NEXT: .long 1166012416 @ 0x457ff000
+;
+; VFP2-LABEL: test_signed_i13_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI22_0
+; VFP2-NEXT: vldr s6, .LCPI22_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: ldr r0, .LCPI22_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r1, s4
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: orr r0, r0, #3840
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI22_0:
+; VFP2-NEXT: .long 0xc5800000 @ float -4096
+; VFP2-NEXT: .LCPI22_1:
+; VFP2-NEXT: .long 0x457ff000 @ float 4095
+; VFP2-NEXT: .LCPI22_2:
+; VFP2-NEXT: .long 4294963200 @ 0xfffff000
+ %x = call i13 @llvm.fptosi.sat.i13.f16(half %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i16_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-956301312
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: ldr r0, .LCPI23_0
+; SOFT-NEXT: ldr r1, .LCPI23_1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #32512
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI23_0:
+; SOFT-NEXT: .long 4294934528 @ 0xffff8000
+; SOFT-NEXT: .LCPI23_1:
+; SOFT-NEXT: .long 1191181824 @ 0x46fffe00
+;
+; VFP2-LABEL: test_signed_i16_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI23_0
+; VFP2-NEXT: vldr s6, .LCPI23_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: ldr r0, .LCPI23_2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r1, s4
+; VFP2-NEXT: movlt r1, r0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: mov r0, #255
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: orr r0, r0, #32512
+; VFP2-NEXT: movle r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI23_0:
+; VFP2-NEXT: .long 0xc7000000 @ float -32768
+; VFP2-NEXT: .LCPI23_1:
+; VFP2-NEXT: .long 0x46fffe00 @ float 32767
+; VFP2-NEXT: .LCPI23_2:
+; VFP2-NEXT: .long 4294934528 @ 0xffff8000
+ %x = call i16 @llvm.fptosi.sat.i16.f16(half %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i19_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #142606336
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: orr r1, r1, #-1073741824
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: mov r0, #66846720
+; SOFT-NEXT: orr r0, r0, #-67108864
+; SOFT-NEXT: ldr r1, .LCPI24_0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: ldr r1, .LCPI24_1
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI24_0:
+; SOFT-NEXT: .long 1216348096 @ 0x487fffc0
+; SOFT-NEXT: .LCPI24_1:
+; SOFT-NEXT: .long 262143 @ 0x3ffff
+;
+; VFP2-LABEL: test_signed_i19_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s6, .LCPI24_2
+; VFP2-NEXT: vldr s2, .LCPI24_0
+; VFP2-NEXT: mov r0, #66846720
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: orr r0, r0, #-67108864
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: ldr r1, .LCPI24_1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmov r2, s4
+; VFP2-NEXT: movge r0, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: movgt r0, r1
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI24_0:
+; VFP2-NEXT: .long 0x487fffc0 @ float 262143
+; VFP2-NEXT: .LCPI24_1:
+; VFP2-NEXT: .long 262143 @ 0x3ffff
+; VFP2-NEXT: .LCPI24_2:
+; VFP2-NEXT: .long 0xc8800000 @ float -262144
+ %x = call i19 @llvm.fptosi.sat.i19.f16(half %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i32_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, lr}
+; SOFT-NEXT: push {r4, r5, r6, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-822083584
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2iz
+; SOFT-NEXT: mov r6, r0
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1325400064
+; SOFT-NEXT: moveq r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r0, r6
+; SOFT-NEXT: pop {r4, r5, r6, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i32_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vmov s0, r0
+; VFP2-NEXT: vldr s2, .LCPI25_0
+; VFP2-NEXT: vldr s6, .LCPI25_1
+; VFP2-NEXT: vcmp.f32 s0, s2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcvt.s32.f32 s4, s0
+; VFP2-NEXT: vcmp.f32 s0, s6
+; VFP2-NEXT: vmov r0, s4
+; VFP2-NEXT: movlt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s0, s0
+; VFP2-NEXT: mvngt r0, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI25_0:
+; VFP2-NEXT: .long 0xcf000000 @ float -2.14748365E+9
+; VFP2-NEXT: .LCPI25_1:
+; VFP2-NEXT: .long 0x4effffff @ float 2.14748352E+9
+ %x = call i32 @llvm.fptosi.sat.i32.f16(half %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i50_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-671088640
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2lz
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1476395008
+; SOFT-NEXT: moveq r5, r7
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-671088640
+; SOFT-NEXT: movne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r1, #16646144
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: orr r1, r1, #-16777216
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: moveq r6, r1
+; SOFT-NEXT: mvn r1, #-1476395008
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: ldr r1, .LCPI26_0
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: movne r6, r1
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+; SOFT-NEXT: .p2align 2
+; SOFT-NEXT: @ %bb.1:
+; SOFT-NEXT: .LCPI26_0:
+; SOFT-NEXT: .long 131071 @ 0x1ffff
+;
+; VFP2-LABEL: test_signed_i50_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r11, lr}
+; VFP2-NEXT: push {r11, lr}
+; VFP2-NEXT: .vsave {d8, d9}
+; VFP2-NEXT: vpush {d8, d9}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: vldr s16, .LCPI26_0
+; VFP2-NEXT: vmov s18, r0
+; VFP2-NEXT: bl __aeabi_f2lz
+; VFP2-NEXT: vcmp.f32 s18, s16
+; VFP2-NEXT: mov r2, #16646144
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: orr r2, r2, #-16777216
+; VFP2-NEXT: vldr s0, .LCPI26_1
+; VFP2-NEXT: ldr r3, .LCPI26_2
+; VFP2-NEXT: vcmp.f32 s18, s0
+; VFP2-NEXT: movlt r1, r2
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s18
+; VFP2-NEXT: movgt r1, r3
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s16
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s0
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s18, s18
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vpop {d8, d9}
+; VFP2-NEXT: pop {r11, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI26_0:
+; VFP2-NEXT: .long 0xd8000000 @ float -5.62949953E+14
+; VFP2-NEXT: .LCPI26_1:
+; VFP2-NEXT: .long 0x57ffffff @ float 5.6294992E+14
+; VFP2-NEXT: .LCPI26_2:
+; VFP2-NEXT: .long 131071 @ 0x1ffff
+ %x = call i50 @llvm.fptosi.sat.i50.f16(half %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i64_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-553648128
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_f2lz
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: cmp r7, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1593835520
+; SOFT-NEXT: moveq r5, r7
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-553648128
+; SOFT-NEXT: movne r5, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1593835520
+; SOFT-NEXT: moveq r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r6, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r5
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: pop {r4, r5, r6, r7, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i64_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __aeabi_f2lz
+; VFP2-NEXT: vldr s0, .LCPI27_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI27_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI27_0:
+; VFP2-NEXT: .long 0xdf000000 @ float -9.22337203E+18
+; VFP2-NEXT: .LCPI27_1:
+; VFP2-NEXT: .long 0x5effffff @ float 9.22337149E+18
+ %x = call i64 @llvm.fptosi.sat.i64.f16(half %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i100_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __fixsfti
+; SOFT-NEXT: mov r9, r0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: mov r7, r2
+; SOFT-NEXT: mov r8, r3
+; SOFT-NEXT: moveq r9, r5
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r9, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: movne r9, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-251658240
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-1895825408
+; SOFT-NEXT: mvneq r8, #7
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: movne r8, #7
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r9
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: mov r2, r7
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i100_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __fixsfti
+; VFP2-NEXT: vldr s0, .LCPI28_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI28_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: mvnlt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: movgt r3, #7
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI28_0:
+; VFP2-NEXT: .long 0xf1000000 @ float -6.338253E+29
+; VFP2-NEXT: .LCPI28_1:
+; VFP2-NEXT: .long 0x70ffffff @ float 6.33825262E+29
+ %x = call i100 @llvm.fptosi.sat.i100.f16(half %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f16(half %f) nounwind {
+; SOFT-LABEL: test_signed_i128_f16:
+; SOFT: @ %bb.0:
+; SOFT-NEXT: .save {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: push {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mov r1, #255
+; SOFT-NEXT: orr r1, r1, #65280
+; SOFT-NEXT: and r0, r0, r1
+; SOFT-NEXT: bl __aeabi_h2f
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: mov r4, r0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: mov r5, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __fixsfti
+; SOFT-NEXT: mov r9, r0
+; SOFT-NEXT: mov r6, r1
+; SOFT-NEXT: cmp r5, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: mov r7, r2
+; SOFT-NEXT: mov r8, r3
+; SOFT-NEXT: moveq r9, r5
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r9, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: movne r9, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: moveq r6, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: movne r6, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: moveq r7, r0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, #-16777216
+; SOFT-NEXT: movne r7, #0
+; SOFT-NEXT: bl __aeabi_fcmpge
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mvn r1, #-2130706432
+; SOFT-NEXT: moveq r8, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpgt
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r4
+; SOFT-NEXT: mov r1, r4
+; SOFT-NEXT: mvnne r8, #-2147483648
+; SOFT-NEXT: bl __aeabi_fcmpun
+; SOFT-NEXT: cmp r0, #0
+; SOFT-NEXT: mov r0, r9
+; SOFT-NEXT: movne r8, #0
+; SOFT-NEXT: mov r1, r6
+; SOFT-NEXT: mov r2, r7
+; SOFT-NEXT: mov r3, r8
+; SOFT-NEXT: pop {r4, r5, r6, r7, r8, r9, r11, lr}
+; SOFT-NEXT: mov pc, lr
+;
+; VFP2-LABEL: test_signed_i128_f16:
+; VFP2: @ %bb.0:
+; VFP2-NEXT: .save {r4, lr}
+; VFP2-NEXT: push {r4, lr}
+; VFP2-NEXT: bl __aeabi_h2f
+; VFP2-NEXT: mov r4, r0
+; VFP2-NEXT: bl __fixsfti
+; VFP2-NEXT: vldr s0, .LCPI29_0
+; VFP2-NEXT: vmov s2, r4
+; VFP2-NEXT: vldr s4, .LCPI29_1
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r0, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r1, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s0
+; VFP2-NEXT: movvs r2, #0
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s4
+; VFP2-NEXT: movlt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: vcmp.f32 s2, s2
+; VFP2-NEXT: mvngt r3, #-2147483648
+; VFP2-NEXT: vmrs APSR_nzcv, fpscr
+; VFP2-NEXT: movvs r3, #0
+; VFP2-NEXT: pop {r4, lr}
+; VFP2-NEXT: mov pc, lr
+; VFP2-NEXT: .p2align 2
+; VFP2-NEXT: @ %bb.1:
+; VFP2-NEXT: .LCPI29_0:
+; VFP2-NEXT: .long 0xff000000 @ float -1.70141183E+38
+; VFP2-NEXT: .LCPI29_1:
+; VFP2-NEXT: .long 0x7effffff @ float 1.70141173E+38
+ %x = call i128 @llvm.fptosi.sat.i128.f16(half %f)
+ ret i128 %x
+}
diff --git a/llvm/test/CodeGen/X86/fptosi-sat-scalar.ll b/llvm/test/CodeGen/X86/fptosi-sat-scalar.ll
new file mode 100644
index 000000000000..8c6ae0a389c3
--- /dev/null
+++ b/llvm/test/CodeGen/X86/fptosi-sat-scalar.ll
@@ -0,0 +1,4711 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=i686-linux | FileCheck %s --check-prefixes=X86,X86-X87
+; RUN: llc < %s -mtriple=i686-linux -mattr=+sse2 | FileCheck %s --check-prefixes=X86,X86-SSE
+; RUN: llc < %s -mtriple=x86_64-linux | FileCheck %s --check-prefix=X64
+
+;
+; 32-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f32 (float)
+declare i8 @llvm.fptosi.sat.i8.f32 (float)
+declare i13 @llvm.fptosi.sat.i13.f32 (float)
+declare i16 @llvm.fptosi.sat.i16.f32 (float)
+declare i19 @llvm.fptosi.sat.i19.f32 (float)
+declare i32 @llvm.fptosi.sat.i32.f32 (float)
+declare i50 @llvm.fptosi.sat.i50.f32 (float)
+declare i64 @llvm.fptosi.sat.i64.f32 (float)
+declare i100 @llvm.fptosi.sat.i100.f32(float)
+declare i128 @llvm.fptosi.sat.i128.f32(float)
+
+define i1 @test_signed_i1_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i1_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fchs
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %dl
+; X86-X87-NEXT: jb .LBB0_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB0_2:
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: ja .LBB0_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ebx
+; X86-X87-NEXT: .LBB0_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB0_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %ebx, %ecx
+; X86-X87-NEXT: .LBB0_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i1_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovael %ecx, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i1_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovael %ecx, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmoval %ecx, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptosi.sat.i1.f32(float %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i8_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-128, %dl
+; X86-X87-NEXT: jb .LBB1_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB1_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $127, %cl
+; X86-X87-NEXT: ja .LBB1_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB1_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jnp .LBB1_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB1_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i8_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $128, %ecx
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $127, %edx
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i8_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $128, %ecx
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $127, %edx
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptosi.sat.i8.f32(float %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i13_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-4096, %cx # imm = 0xF000
+; X86-X87-NEXT: jb .LBB2_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB2_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-X87-NEXT: ja .LBB2_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB2_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB2_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB2_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i13_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $61440, %ecx # imm = 0xF000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i13_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $61440, %ecx # imm = 0xF000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $4095, %edx # imm = 0xFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptosi.sat.i13.f32(float %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i16_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-32768, %cx # imm = 0x8000
+; X86-X87-NEXT: jb .LBB3_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB3_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-X87-NEXT: ja .LBB3_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB3_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB3_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB3_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i16_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32768, %ecx # imm = 0x8000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i16_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32768, %ecx # imm = 0x8000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptosi.sat.i16.f32(float %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i19_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-X87-NEXT: jb .LBB4_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB4_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB4_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB4_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB4_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB4_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i19_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i19_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptosi.sat.i19.f32(float %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i32_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB5_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB5_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB5_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB5_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB5_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB5_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i32_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i32_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptosi.sat.i32.f32(float %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i50_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB6_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB6_2:
+; X86-X87-NEXT: movl $-131072, %edi # imm = 0xFFFE0000
+; X86-X87-NEXT: jb .LBB6_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB6_4:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $131071, %esi # imm = 0x1FFFF
+; X86-X87-NEXT: ja .LBB6_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB6_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB6_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB6_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB6_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB6_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i50_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-131072, %eax # imm = 0xFFFE0000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $131071, %edx # imm = 0x1FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i50_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-562949953421312, %rcx # imm = 0xFFFE000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $562949953421311, %rdx # imm = 0x1FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptosi.sat.i50.f32(float %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i64_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB7_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB7_2:
+; X86-X87-NEXT: movl $-2147483648, %edi # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB7_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB7_4:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %esi # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB7_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB7_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB7_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB7_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB7_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB7_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i64_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i64_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rdx # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptosi.sat.i64.f32(float %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i100_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixsfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-8, %ebx
+; X86-X87-NEXT: jb .LBB8_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: .LBB8_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: jb .LBB8_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebp
+; X86-X87-NEXT: .LBB8_4:
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB8_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB8_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB8_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB8_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $7, %edi
+; X86-X87-NEXT: ja .LBB8_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: .LBB8_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB8_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB8_12:
+; X86-X87-NEXT: movl %ebx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: andl $15, %edx
+; X86-X87-NEXT: movb %dl, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_signed_i100_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $28, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixsfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: xorl %ebp, %ebp
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-8, %ebx
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB8_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB8_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmoval %eax, %edi
+; X86-SSE-NEXT: cmoval %eax, %edx
+; X86-SSE-NEXT: cmoval %eax, %ecx
+; X86-SSE-NEXT: movl $7, %eax
+; X86-SSE-NEXT: cmovbel %ebx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ebp, %eax
+; X86-SSE-NEXT: cmovpl %ebp, %ecx
+; X86-SSE-NEXT: cmovpl %ebp, %edx
+; X86-SSE-NEXT: cmovpl %ebp, %edi
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %ecx, (%esi)
+; X86-SSE-NEXT: andl $15, %eax
+; X86-SSE-NEXT: movb %al, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $28, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i100_f32:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixsfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm0 # 4-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-34359738368, %rsi # imm = 0xFFFFFFF800000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $34359738367, %rsi # imm = 0x7FFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptosi.sat.i100.f32(float %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f32(float %f) nounwind {
+; X86-X87-LABEL: test_signed_i128_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixsfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jae .LBB9_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jae .LBB9_3
+; X86-X87-NEXT: .LBB9_4:
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB9_6
+; X86-X87-NEXT: .LBB9_5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB9_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %eax # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB9_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB9_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB9_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB9_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB9_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, %edx
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB9_12:
+; X86-X87-NEXT: movl %ebx, 12(%ecx)
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %eax, 4(%ecx)
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+; X86-X87-NEXT: .LBB9_1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB9_4
+; X86-X87-NEXT: .LBB9_3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jae .LBB9_5
+; X86-X87-NEXT: jmp .LBB9_6
+;
+; X86-SSE-LABEL: test_signed_i128_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $28, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixsfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: cmovbl %ecx, %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: cmovbl %ecx, %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: cmovbl %ecx, %edi
+; X86-SSE-NEXT: movl $-2147483648, %ebx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %ebp # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ebx, %ebp
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: cmovpl %ecx, %edi
+; X86-SSE-NEXT: cmovpl %ecx, %ebp
+; X86-SSE-NEXT: movl %ebp, 12(%esi)
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %eax, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $28, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i128_f32:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixsfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm0 # 4-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rsi # imm = 0x8000000000000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rsi # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptosi.sat.i128.f32(float %f)
+ ret i128 %x
+}
+
+;
+; 64-bit float to signed integer
+;
+
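For reference, the saturating behavior these checks encode (clamp to the signed range of the target width, with NaN mapping to 0, per the commit message above) can be sketched as follows. This is an illustrative model, not part of the patch; the function name `fptosi_sat` and its signature are assumptions made for the sketch.

```python
# Hedged sketch of the llvm.fptosi.sat semantics described in the commit
# message: values below the signed minimum saturate to the minimum, values
# above the signed maximum saturate to the maximum, NaN maps to 0, and
# everything else truncates toward zero like a plain fptosi.
def fptosi_sat(f: float, bits: int) -> int:
    lo = -(1 << (bits - 1))          # e.g. -4096 for i13, -2**63 for i64
    hi = (1 << (bits - 1)) - 1       # e.g.  4095 for i13,  2**63-1 for i64
    if f != f:                       # NaN compares unequal to itself
        return 0
    if f <= lo:
        return lo
    if f >= hi:
        return hi
    return int(f)                    # int() truncates toward zero
```

The x86 lowering in the checks above mirrors this: a `cvttss2si`/`cvttsd2si` truncating conversion, two `ucomiss`/`ucomisd` compares against the range bounds with `cmov` selects for the saturation limits, and a final self-compare whose parity flag detects NaN and selects zero.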
+declare i1 @llvm.fptosi.sat.i1.f64 (double)
+declare i8 @llvm.fptosi.sat.i8.f64 (double)
+declare i13 @llvm.fptosi.sat.i13.f64 (double)
+declare i16 @llvm.fptosi.sat.i16.f64 (double)
+declare i19 @llvm.fptosi.sat.i19.f64 (double)
+declare i32 @llvm.fptosi.sat.i32.f64 (double)
+declare i50 @llvm.fptosi.sat.i50.f64 (double)
+declare i64 @llvm.fptosi.sat.i64.f64 (double)
+declare i100 @llvm.fptosi.sat.i100.f64(double)
+declare i128 @llvm.fptosi.sat.i128.f64(double)
+
+define i1 @test_signed_i1_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i1_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fchs
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %dl
+; X86-X87-NEXT: jb .LBB10_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB10_2:
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: ja .LBB10_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ebx
+; X86-X87-NEXT: .LBB10_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB10_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %ebx, %ecx
+; X86-X87-NEXT: .LBB10_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i1_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovael %ecx, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i1_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovael %ecx, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmoval %ecx, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovpl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptosi.sat.i1.f64(double %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i8_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-128, %dl
+; X86-X87-NEXT: jb .LBB11_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB11_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $127, %cl
+; X86-X87-NEXT: ja .LBB11_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB11_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jnp .LBB11_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB11_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i8_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $128, %ecx
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $127, %edx
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i8_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $128, %ecx
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $127, %edx
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptosi.sat.i8.f64(double %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i13_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-4096, %cx # imm = 0xF000
+; X86-X87-NEXT: jb .LBB12_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB12_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-X87-NEXT: ja .LBB12_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB12_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB12_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB12_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i13_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $61440, %ecx # imm = 0xF000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i13_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $61440, %ecx # imm = 0xF000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $4095, %edx # imm = 0xFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptosi.sat.i13.f64(double %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i16_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-32768, %cx # imm = 0x8000
+; X86-X87-NEXT: jb .LBB13_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB13_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-X87-NEXT: ja .LBB13_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB13_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB13_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB13_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i16_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32768, %ecx # imm = 0x8000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i16_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32768, %ecx # imm = 0x8000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptosi.sat.i16.f64(double %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i19_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-X87-NEXT: jb .LBB14_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB14_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB14_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB14_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB14_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB14_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i19_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i19_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptosi.sat.i19.f64(double %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i32_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB15_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB15_2:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB15_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB15_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB15_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB15_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i32_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i32_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptosi.sat.i32.f64(double %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i50_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB16_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB16_2:
+; X86-X87-NEXT: movl $-131072, %edi # imm = 0xFFFE0000
+; X86-X87-NEXT: jb .LBB16_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB16_4:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $131071, %esi # imm = 0x1FFFF
+; X86-X87-NEXT: ja .LBB16_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB16_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB16_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB16_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB16_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB16_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i50_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-131072, %eax # imm = 0xFFFE0000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $131071, %edx # imm = 0x1FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i50_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-562949953421312, %rcx # imm = 0xFFFE000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $562949953421311, %rdx # imm = 0x1FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptosi.sat.i50.f64(double %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i64_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB17_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB17_2:
+; X86-X87-NEXT: movl $-2147483648, %edi # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB17_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB17_4:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %esi # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB17_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB17_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB17_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB17_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB17_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB17_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i64_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i64_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rdx # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptosi.sat.i64.f64(double %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i100_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fstl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fstl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixdfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-8, %ebx
+; X86-X87-NEXT: jb .LBB18_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: .LBB18_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: jb .LBB18_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebp
+; X86-X87-NEXT: .LBB18_4:
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB18_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB18_6:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fldl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB18_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB18_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $7, %edi
+; X86-X87-NEXT: ja .LBB18_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: .LBB18_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB18_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB18_12:
+; X86-X87-NEXT: movl %ebx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: andl $15, %edx
+; X86-X87-NEXT: movb %dl, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_signed_i100_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixdfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: xorl %ebp, %ebp
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-8, %ebx
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB18_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB18_2:
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmoval %eax, %edi
+; X86-SSE-NEXT: cmoval %eax, %edx
+; X86-SSE-NEXT: cmoval %eax, %ecx
+; X86-SSE-NEXT: movl $7, %eax
+; X86-SSE-NEXT: cmovbel %ebx, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ebp, %eax
+; X86-SSE-NEXT: cmovpl %ebp, %ecx
+; X86-SSE-NEXT: cmovpl %ebp, %edx
+; X86-SSE-NEXT: cmovpl %ebp, %edi
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %ecx, (%esi)
+; X86-SSE-NEXT: andl $15, %eax
+; X86-SSE-NEXT: movb %al, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i100_f64:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movsd %xmm0, (%rsp) # 8-byte Spill
+; X64-NEXT: callq __fixdfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movsd (%rsp), %xmm0 # 8-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-34359738368, %rsi # imm = 0xFFFFFFF800000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $34359738367, %rsi # imm = 0x7FFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptosi.sat.i100.f64(double %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f64(double %f) nounwind {
+; X86-X87-LABEL: test_signed_i128_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fstl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fstl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixdfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jae .LBB19_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jae .LBB19_3
+; X86-X87-NEXT: .LBB19_4:
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB19_6
+; X86-X87-NEXT: .LBB19_5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB19_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fldl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %eax # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB19_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB19_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB19_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB19_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB19_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, %edx
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB19_12:
+; X86-X87-NEXT: movl %ebx, 12(%ecx)
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %eax, 4(%ecx)
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+; X86-X87-NEXT: .LBB19_1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB19_4
+; X86-X87-NEXT: .LBB19_3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jae .LBB19_5
+; X86-X87-NEXT: jmp .LBB19_6
+;
+; X86-SSE-LABEL: test_signed_i128_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixdfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: cmovbl %ecx, %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: cmovbl %ecx, %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: cmovbl %ecx, %edi
+; X86-SSE-NEXT: movl $-2147483648, %ebx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %ebp # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ebx, %ebp
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: cmovpl %ecx, %edi
+; X86-SSE-NEXT: cmovpl %ecx, %ebp
+; X86-SSE-NEXT: movl %ebp, 12(%esi)
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %eax, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i128_f64:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movsd %xmm0, (%rsp) # 8-byte Spill
+; X64-NEXT: callq __fixdfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movsd (%rsp), %xmm0 # 8-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rsi # imm = 0x8000000000000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rsi # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomisd %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptosi.sat.i128.f64(double %f)
+ ret i128 %x
+}
+
+;
+; 16-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f16 (half)
+declare i8 @llvm.fptosi.sat.i8.f16 (half)
+declare i13 @llvm.fptosi.sat.i13.f16 (half)
+declare i16 @llvm.fptosi.sat.i16.f16 (half)
+declare i19 @llvm.fptosi.sat.i19.f16 (half)
+declare i32 @llvm.fptosi.sat.i32.f16 (half)
+declare i50 @llvm.fptosi.sat.i50.f16 (half)
+declare i64 @llvm.fptosi.sat.i64.f16 (half)
+declare i100 @llvm.fptosi.sat.i100.f16(half)
+declare i128 @llvm.fptosi.sat.i128.f16(half)
+
+define i1 @test_signed_i1_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i1_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: subl $24, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fchs
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %dl
+; X86-X87-NEXT: jb .LBB20_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB20_2:
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: ja .LBB20_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ebx
+; X86-X87-NEXT: .LBB20_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB20_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %ebx, %ecx
+; X86-X87-NEXT: .LBB20_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $24, %esp
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i1_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovael %ecx, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i1_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovael %ecx, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmoval %ecx, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptosi.sat.i1.f16(half %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i8_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-128, %dl
+; X86-X87-NEXT: jb .LBB21_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB21_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $127, %cl
+; X86-X87-NEXT: ja .LBB21_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB21_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jnp .LBB21_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB21_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i8_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $128, %ecx
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $127, %edx
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i8_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $128, %ecx
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $127, %edx
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptosi.sat.i8.f16(half %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i13_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-4096, %cx # imm = 0xF000
+; X86-X87-NEXT: jb .LBB22_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB22_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-X87-NEXT: ja .LBB22_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB22_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB22_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB22_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i13_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $61440, %ecx # imm = 0xF000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i13_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $61440, %ecx # imm = 0xF000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $4095, %edx # imm = 0xFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptosi.sat.i13.f16(half %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i16_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-32768, %cx # imm = 0x8000
+; X86-X87-NEXT: jb .LBB23_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB23_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-X87-NEXT: ja .LBB23_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB23_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB23_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB23_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i16_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32768, %ecx # imm = 0x8000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i16_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32768, %ecx # imm = 0x8000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptosi.sat.i16.f16(half %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i19_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-X87-NEXT: jb .LBB24_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB24_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB24_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB24_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB24_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB24_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i19_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i19_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptosi.sat.i19.f16(half %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i32_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB25_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB25_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB25_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB25_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB25_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB25_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i32_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i32_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptosi.sat.i32.f16(half %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i50_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB26_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB26_2:
+; X86-X87-NEXT: movl $-131072, %edi # imm = 0xFFFE0000
+; X86-X87-NEXT: jb .LBB26_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB26_4:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $131071, %esi # imm = 0x1FFFF
+; X86-X87-NEXT: ja .LBB26_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB26_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB26_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB26_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB26_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB26_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i50_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $24, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-131072, %eax # imm = 0xFFFE0000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $131071, %edx # imm = 0x1FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $24, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i50_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-562949953421312, %rcx # imm = 0xFFFE000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $562949953421311, %rdx # imm = 0x1FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptosi.sat.i50.f16(half %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i64_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB27_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB27_2:
+; X86-X87-NEXT: movl $-2147483648, %edi # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB27_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB27_4:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %esi # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB27_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB27_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB27_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB27_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB27_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB27_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i64_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $24, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $24, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i64_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rdx # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rdx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovnpq %rdx, %rax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptosi.sat.i64.f16(half %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i100_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixsfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-8, %ebx
+; X86-X87-NEXT: jb .LBB28_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: .LBB28_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: jb .LBB28_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebp
+; X86-X87-NEXT: .LBB28_4:
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB28_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB28_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB28_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB28_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $7, %edi
+; X86-X87-NEXT: ja .LBB28_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: .LBB28_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB28_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB28_12:
+; X86-X87-NEXT: movl %ebx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: andl $15, %edx
+; X86-X87-NEXT: movb %dl, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_signed_i100_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: calll __fixsfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: movss {{[-0-9]+}}(%e{{[sb]}}p), %xmm0 # 4-byte Reload
+; X86-SSE-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: xorl %ebp, %ebp
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-8, %ebx
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB28_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB28_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmoval %eax, %edi
+; X86-SSE-NEXT: cmoval %eax, %edx
+; X86-SSE-NEXT: cmoval %eax, %ecx
+; X86-SSE-NEXT: movl $7, %eax
+; X86-SSE-NEXT: cmovbel %ebx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ebp, %eax
+; X86-SSE-NEXT: cmovpl %ebp, %ecx
+; X86-SSE-NEXT: cmovpl %ebp, %edx
+; X86-SSE-NEXT: cmovpl %ebp, %edi
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %ecx, (%esi)
+; X86-SSE-NEXT: andl $15, %eax
+; X86-SSE-NEXT: movb %al, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i100_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixsfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm0 # 4-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-34359738368, %rsi # imm = 0xFFFFFFF800000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $34359738367, %rsi # imm = 0x7FFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptosi.sat.i100.f16(half %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f16(half %f) nounwind {
+; X86-X87-LABEL: test_signed_i128_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixsfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jae .LBB29_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jae .LBB29_3
+; X86-X87-NEXT: .LBB29_4:
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB29_6
+; X86-X87-NEXT: .LBB29_5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB29_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %eax # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB29_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB29_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB29_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB29_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB29_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, %edx
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB29_12:
+; X86-X87-NEXT: movl %ebx, 12(%ecx)
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %eax, 4(%ecx)
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+; X86-X87-NEXT: .LBB29_1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB29_4
+; X86-X87-NEXT: .LBB29_3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jae .LBB29_5
+; X86-X87-NEXT: jmp .LBB29_6
+;
+; X86-SSE-LABEL: test_signed_i128_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: calll __fixsfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: movss {{[-0-9]+}}(%e{{[sb]}}p), %xmm0 # 4-byte Reload
+; X86-SSE-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: cmovbl %ecx, %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: cmovbl %ecx, %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: cmovbl %ecx, %edi
+; X86-SSE-NEXT: movl $-2147483648, %ebx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $2147483647, %ebp # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ebx, %ebp
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm0
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: cmovpl %ecx, %edi
+; X86-SSE-NEXT: cmovpl %ecx, %ebp
+; X86-SSE-NEXT: movl %ebp, 12(%esi)
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %eax, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i128_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee@PLT
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixsfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm0 # 4-byte Reload
+; X64-NEXT: # xmm0 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rsi # imm = 0x8000000000000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $9223372036854775807, %rsi # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: ucomiss %xmm0, %xmm0
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptosi.sat.i128.f16(half %f)
+ ret i128 %x
+}
+
+;
+; 80-bit float to signed integer
+;
+
+declare i1 @llvm.fptosi.sat.i1.f80 (x86_fp80)
+declare i8 @llvm.fptosi.sat.i8.f80 (x86_fp80)
+declare i13 @llvm.fptosi.sat.i13.f80 (x86_fp80)
+declare i16 @llvm.fptosi.sat.i16.f80 (x86_fp80)
+declare i19 @llvm.fptosi.sat.i19.f80 (x86_fp80)
+declare i32 @llvm.fptosi.sat.i32.f80 (x86_fp80)
+declare i50 @llvm.fptosi.sat.i50.f80 (x86_fp80)
+declare i64 @llvm.fptosi.sat.i64.f80 (x86_fp80)
+declare i100 @llvm.fptosi.sat.i100.f80(x86_fp80)
+declare i128 @llvm.fptosi.sat.i128.f80(x86_fp80)
+
+define i1 @test_signed_i1_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i1_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fchs
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %dl
+; X86-X87-NEXT: jb .LBB30_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB30_2:
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: ja .LBB30_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ebx
+; X86-X87-NEXT: .LBB30_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB30_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %ebx, %ecx
+; X86-X87-NEXT: .LBB30_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i1_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzbl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: fld1
+; X86-SSE-NEXT: fchs
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovael %ecx, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i1_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzbl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: fld1
+; X64-NEXT: fchs
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovael %ecx, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmoval %ecx, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovpl %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptosi.sat.i1.f80(x86_fp80 %f)
+ ret i1 %x
+}
+
+define i8 @test_signed_i8_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i8_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-128, %dl
+; X86-X87-NEXT: jb .LBB31_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %dl
+; X86-X87-NEXT: .LBB31_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $127, %cl
+; X86-X87-NEXT: ja .LBB31_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB31_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jnp .LBB31_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB31_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i8_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzbl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $128, %ecx
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $127, %edx
+; X86-SSE-NEXT: cmovbel %ecx, %edx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovnpl %edx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i8_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzbl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $128, %ecx
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $127, %edx
+; X64-NEXT: cmovbel %ecx, %edx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpl %edx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptosi.sat.i8.f80(x86_fp80 %f)
+ ret i8 %x
+}
+
+define i13 @test_signed_i13_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i13_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-4096, %cx # imm = 0xF000
+; X86-X87-NEXT: jb .LBB32_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB32_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $4095, %edx # imm = 0xFFF
+; X86-X87-NEXT: ja .LBB32_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB32_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB32_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB32_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i13_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movw $-4096, %ax # imm = 0xF000
+; X86-SSE-NEXT: jb .LBB32_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB32_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $4095, %ecx # imm = 0xFFF
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovnpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i13_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movw $-4096, %ax # imm = 0xF000
+; X64-NEXT: jb .LBB32_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: .LBB32_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $4095, %ecx # imm = 0xFFF
+; X64-NEXT: cmovbel %eax, %ecx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpl %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptosi.sat.i13.f80(x86_fp80 %f)
+ ret i13 %x
+}
+
+define i16 @test_signed_i16_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i16_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movw $-32768, %cx # imm = 0x8000
+; X86-X87-NEXT: jb .LBB33_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB33_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $32767, %edx # imm = 0x7FFF
+; X86-X87-NEXT: ja .LBB33_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB33_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB33_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB33_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i16_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movw $-32768, %ax # imm = 0x8000
+; X86-SSE-NEXT: jb .LBB33_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB33_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $32767, %ecx # imm = 0x7FFF
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovnpl %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i16_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movw $-32768, %ax # imm = 0x8000
+; X64-NEXT: jb .LBB33_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: .LBB33_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $32767, %ecx # imm = 0x7FFF
+; X64-NEXT: cmovbel %eax, %ecx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpl %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptosi.sat.i16.f80(x86_fp80 %f)
+ ret i16 %x
+}
+
+define i19 @test_signed_i19_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i19_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-262144, %ecx # imm = 0xFFFC0000
+; X86-X87-NEXT: jb .LBB34_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB34_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB34_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB34_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB34_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB34_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i19_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw (%esp)
+; X86-SSE-NEXT: movzwl (%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw (%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $-262144, %eax # imm = 0xFFFC0000
+; X86-SSE-NEXT: jb .LBB34_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB34_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $262143, %ecx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovnpl %ecx, %eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i19_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistl -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $-262144, %eax # imm = 0xFFFC0000
+; X64-NEXT: jb .LBB34_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: .LBB34_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $262143, %ecx # imm = 0x3FFFF
+; X64-NEXT: cmovbel %eax, %ecx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpl %ecx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptosi.sat.i19.f80(x86_fp80 %f)
+ ret i19 %x
+}
+
+define i32 @test_signed_i32_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i32_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB35_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB35_2:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB35_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edx
+; X86-X87-NEXT: .LBB35_4:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jp .LBB35_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edx, %ecx
+; X86-X87-NEXT: .LBB35_6:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i32_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw (%esp)
+; X86-SSE-NEXT: movzwl (%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw (%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: jb .LBB35_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB35_2:
+; X86-SSE-NEXT: fldl {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $2147483647, %ecx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovnpl %ecx, %eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i32_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistl -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X64-NEXT: jb .LBB35_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: .LBB35_2:
+; X64-NEXT: fldl {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movl $2147483647, %ecx # imm = 0x7FFFFFFF
+; X64-NEXT: cmovbel %eax, %ecx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpl %ecx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptosi.sat.i32.f80(x86_fp80 %f)
+ ret i32 %x
+}
+
+define i50 @test_signed_i50_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i50_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB36_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB36_2:
+; X86-X87-NEXT: movl $-131072, %edi # imm = 0xFFFE0000
+; X86-X87-NEXT: jb .LBB36_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB36_4:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $131071, %esi # imm = 0x1FFFF
+; X86-X87-NEXT: ja .LBB36_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB36_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB36_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB36_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB36_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB36_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i50_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-131072, %eax # imm = 0xFFFE0000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: fldl {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $131071, %edx # imm = 0x1FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i50_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB36_1
+; X64-NEXT: # %bb.2:
+; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rax
+; X64-NEXT: jmp .LBB36_3
+; X64-NEXT: .LBB36_1:
+; X64-NEXT: movabsq $-562949953421312, %rax # imm = 0xFFFE000000000000
+; X64-NEXT: .LBB36_3:
+; X64-NEXT: fldl {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movabsq $562949953421311, %rcx # imm = 0x1FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rax, %rcx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptosi.sat.i50.f80(x86_fp80 %f)
+ ret i50 %x
+}
+
+define i64 @test_signed_i64_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i64_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB37_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB37_2:
+; X86-X87-NEXT: movl $-2147483648, %edi # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB37_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB37_4:
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %esi # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB37_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl %edi, %esi
+; X86-X87-NEXT: .LBB37_6:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: ja .LBB37_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %edi
+; X86-X87-NEXT: .LBB37_8:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jp .LBB37_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edi, %ecx
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB37_10:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_signed_i64_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: cmovbl %ecx, %esi
+; X86-SSE-NEXT: movl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $2147483647, %edx # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_signed_i64_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB37_1
+; X64-NEXT: # %bb.2:
+; X64-NEXT: movq -{{[0-9]+}}(%rsp), %rax
+; X64-NEXT: jmp .LBB37_3
+; X64-NEXT: .LBB37_1:
+; X64-NEXT: movabsq $-9223372036854775808, %rax # imm = 0x8000000000000000
+; X64-NEXT: .LBB37_3:
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movabsq $9223372036854775807, %rcx # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rax, %rcx
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovnpq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptosi.sat.i64.f80(x86_fp80 %f)
+ ret i64 %x
+}
+
+define i100 @test_signed_i100_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i100_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixxfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-8, %ebx
+; X86-X87-NEXT: jb .LBB38_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: .LBB38_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: jb .LBB38_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebp
+; X86-X87-NEXT: .LBB38_4:
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB38_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB38_6:
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB38_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB38_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $7, %edi
+; X86-X87-NEXT: ja .LBB38_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: .LBB38_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebp
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB38_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebp # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB38_12:
+; X86-X87-NEXT: movl %ebx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: andl $15, %edx
+; X86-X87-NEXT: movb %dl, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_signed_i100_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-SSE-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixxfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-SSE-NEXT: xorl %ebp, %ebp
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $-8, %ebx
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB38_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB38_2:
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmoval %eax, %edi
+; X86-SSE-NEXT: cmoval %eax, %edx
+; X86-SSE-NEXT: cmoval %eax, %ecx
+; X86-SSE-NEXT: movl $7, %eax
+; X86-SSE-NEXT: cmovbel %ebx, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovpl %ebp, %eax
+; X86-SSE-NEXT: cmovpl %ebp, %ecx
+; X86-SSE-NEXT: cmovpl %ebp, %edx
+; X86-SSE-NEXT: cmovpl %ebp, %edi
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %ecx, (%esi)
+; X86-SSE-NEXT: andl $15, %eax
+; X86-SSE-NEXT: movb %al, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i100_f80:
+; X64: # %bb.0:
+; X64-NEXT: subq $40, %rsp
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fstpt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Spill
+; X64-NEXT: fstpt (%rsp)
+; X64-NEXT: callq __fixxfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fldt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Reload
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-34359738368, %rsi # imm = 0xFFFFFFF800000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movabsq $34359738367, %rsi # imm = 0x7FFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: addq $40, %rsp
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptosi.sat.i100.f80(x86_fp80 %f)
+ ret i100 %x
+}
+
+define i128 @test_signed_i128_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_signed_i128_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixxfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: movl $0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jae .LBB39_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jae .LBB39_3
+; X86-X87-NEXT: .LBB39_4:
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jb .LBB39_6
+; X86-X87-NEXT: .LBB39_5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB39_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $2147483647, %eax # imm = 0x7FFFFFFF
+; X86-X87-NEXT: ja .LBB39_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB39_8:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB39_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB39_10:
+; X86-X87-NEXT: fucomp %st(0)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jp .LBB39_12
+; X86-X87-NEXT: # %bb.11:
+; X86-X87-NEXT: movl %esi, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: movl %ebp, %edx
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %ebx # 4-byte Reload
+; X86-X87-NEXT: .LBB39_12:
+; X86-X87-NEXT: movl %ebx, 12(%ecx)
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %eax, 4(%ecx)
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %eax # 4-byte Reload
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+; X86-X87-NEXT: .LBB39_1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl $0, %edx
+; X86-X87-NEXT: jb .LBB39_4
+; X86-X87-NEXT: .LBB39_3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: movl $-2147483648, %ecx # imm = 0x80000000
+; X86-X87-NEXT: jae .LBB39_5
+; X86-X87-NEXT: jmp .LBB39_6
+;
+; X86-SSE-LABEL: test_signed_i128_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebp
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $44, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-SSE-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixxfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: cmovbl %ecx, %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: cmovbl %ecx, %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: cmovbl %ecx, %edi
+; X86-SSE-NEXT: movl $-2147483648, %ebx # imm = 0x80000000
+; X86-SSE-NEXT: cmovael {{[0-9]+}}(%esp), %ebx
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $2147483647, %ebp # imm = 0x7FFFFFFF
+; X86-SSE-NEXT: cmovbel %ebx, %ebp
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: fucompi %st(0), %st
+; X86-SSE-NEXT: cmovpl %ecx, %eax
+; X86-SSE-NEXT: cmovpl %ecx, %edx
+; X86-SSE-NEXT: cmovpl %ecx, %edi
+; X86-SSE-NEXT: cmovpl %ecx, %ebp
+; X86-SSE-NEXT: movl %ebp, 12(%esi)
+; X86-SSE-NEXT: movl %edi, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %eax, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $44, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: popl %ebp
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_signed_i128_f80:
+; X64: # %bb.0:
+; X64-NEXT: subq $40, %rsp
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fstpt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Spill
+; X64-NEXT: fstpt (%rsp)
+; X64-NEXT: callq __fixxfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fldt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Reload
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rsi # imm = 0x8000000000000000
+; X64-NEXT: cmovbq %rsi, %rdx
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: movabsq $9223372036854775807, %rsi # imm = 0x7FFFFFFFFFFFFFFF
+; X64-NEXT: cmovaq %rsi, %rdx
+; X64-NEXT: movq $-1, %rsi
+; X64-NEXT: cmovaq %rsi, %rax
+; X64-NEXT: fucompi %st(0), %st
+; X64-NEXT: cmovpq %rcx, %rax
+; X64-NEXT: cmovpq %rcx, %rdx
+; X64-NEXT: addq $40, %rsp
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptosi.sat.i128.f80(x86_fp80 %f)
+ ret i128 %x
+}
diff --git a/llvm/test/CodeGen/X86/fptoui-sat-scalar.ll b/llvm/test/CodeGen/X86/fptoui-sat-scalar.ll
new file mode 100644
index 000000000000..7ad02208a524
--- /dev/null
+++ b/llvm/test/CodeGen/X86/fptoui-sat-scalar.ll
@@ -0,0 +1,4300 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=i686-linux | FileCheck %s --check-prefixes=X86,X86-X87
+; RUN: llc < %s -mtriple=i686-linux -mattr=+sse2 | FileCheck %s --check-prefixes=X86,X86-SSE
+; RUN: llc < %s -mtriple=x86_64-linux | FileCheck %s --check-prefix=X64
+
+;
+; 32-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f32 (float)
+declare i8 @llvm.fptoui.sat.i8.f32 (float)
+declare i13 @llvm.fptoui.sat.i13.f32 (float)
+declare i16 @llvm.fptoui.sat.i16.f32 (float)
+declare i19 @llvm.fptoui.sat.i19.f32 (float)
+declare i32 @llvm.fptoui.sat.i32.f32 (float)
+declare i50 @llvm.fptoui.sat.i50.f32 (float)
+declare i64 @llvm.fptoui.sat.i64.f32 (float)
+declare i100 @llvm.fptoui.sat.i100.f32(float)
+declare i128 @llvm.fptoui.sat.i128.f32(float)
+
+define i1 @test_unsigned_i1_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i1_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB0_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB0_3
+; X86-X87-NEXT: .LBB0_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB0_3:
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $1, %al
+; X86-X87-NEXT: ja .LBB0_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB0_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i1_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $1, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i1_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptoui.sat.i1.f32(float %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i8_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB1_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB1_3
+; X86-X87-NEXT: .LBB1_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB1_3:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %al
+; X86-X87-NEXT: ja .LBB1_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB1_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i8_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i8_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptoui.sat.i8.f32(float %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i13_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB2_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB2_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-X87-NEXT: ja .LBB2_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB2_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i13_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i13_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptoui.sat.i13.f32(float %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i16_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB3_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB3_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-X87-NEXT: ja .LBB3_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB3_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i16_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i16_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptoui.sat.i16.f32(float %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i19_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB4_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB4_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-X87-NEXT: ja .LBB4_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB4_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i19_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movaps %xmm0, %xmm2
+; X86-SSE-NEXT: subss %xmm1, %xmm2
+; X86-SSE-NEXT: cvttss2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i19_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptoui.sat.i19.f32(float %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i32_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB5_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB5_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB5_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB5_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i32_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movaps %xmm0, %xmm2
+; X86-SSE-NEXT: subss %xmm1, %xmm2
+; X86-SSE-NEXT: cvttss2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i32_f32:
+; X64: # %bb.0:
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptoui.sat.i32.f32(float %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i50_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $16, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB6_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB6_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: jb .LBB6_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB6_4:
+; X86-X87-NEXT: jb .LBB6_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB6_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB6_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %eax
+; X86-X87-NEXT: .LBB6_8:
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB6_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB6_10:
+; X86-X87-NEXT: addl $16, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i50_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm2
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: jbe .LBB6_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorps %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB6_2:
+; X86-SSE-NEXT: movaps %xmm0, %xmm3
+; X86-SSE-NEXT: subss %xmm2, %xmm3
+; X86-SSE-NEXT: movss %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setbe %cl
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %esi
+; X86-SSE-NEXT: jb .LBB6_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %eax
+; X86-SSE-NEXT: shll $31, %eax
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: .LBB6_4:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i50_f32:
+; X64: # %bb.0:
+; X64-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: movaps %xmm0, %xmm2
+; X64-NEXT: subss %xmm1, %xmm2
+; X64-NEXT: cvttss2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $1125899906842623, %rax # imm = 0x3FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptoui.sat.i50.f32(float %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i64_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB7_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB7_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %esi, %esi
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB7_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edi
+; X86-X87-NEXT: .LBB7_4:
+; X86-X87-NEXT: jb .LBB7_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB7_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB7_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: .LBB7_8:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i64_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $20, %esp
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm2
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: jbe .LBB7_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorps %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB7_2:
+; X86-SSE-NEXT: movaps %xmm0, %xmm3
+; X86-SSE-NEXT: subss %xmm2, %xmm3
+; X86-SSE-NEXT: movss %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setbe %cl
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %eax
+; X86-SSE-NEXT: jb .LBB7_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %edx
+; X86-SSE-NEXT: shll $31, %edx
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB7_4:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ecx
+; X86-SSE-NEXT: cmoval %ecx, %edx
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: addl $20, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i64_f32:
+; X64: # %bb.0:
+; X64-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: movaps %xmm0, %xmm2
+; X64-NEXT: subss %xmm1, %xmm2
+; X64-NEXT: cvttss2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movq $-1, %rax
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptoui.sat.i64.f32(float %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i100_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunssfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edi, %edi
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB8_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB8_2:
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB8_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB8_4:
+; X86-X87-NEXT: jb .LBB8_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB8_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $15, %eax
+; X86-X87-NEXT: ja .LBB8_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: .LBB8_8:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB8_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl %esi, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-X87-NEXT: .LBB8_10:
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %edi, (%ecx)
+; X86-X87-NEXT: andl $15, %eax
+; X86-X87-NEXT: movb %al, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i100_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunssfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorps %xmm0, %xmm0
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: movaps %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB8_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB8_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $15, %ebx
+; X86-SSE-NEXT: cmovbel %edi, %ebx
+; X86-SSE-NEXT: movl $-1, %edi
+; X86-SSE-NEXT: cmoval %edi, %edx
+; X86-SSE-NEXT: cmoval %edi, %ecx
+; X86-SSE-NEXT: cmoval %edi, %eax
+; X86-SSE-NEXT: movl %eax, 8(%esi)
+; X86-SSE-NEXT: movl %ecx, 4(%esi)
+; X86-SSE-NEXT: movl %edx, (%esi)
+; X86-SSE-NEXT: andl $15, %ebx
+; X86-SSE-NEXT: movb %bl, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i100_f32:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixunssfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm0, %xmm0
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm1 # 4-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: movabsq $68719476735, %rcx # imm = 0xFFFFFFFFF
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptoui.sat.i100.f32(float %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f32(float %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i128_f32:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: flds {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunssfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB9_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB9_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: jb .LBB9_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB9_4:
+; X86-X87-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB9_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB9_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB9_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ebx, %eax
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB9_8:
+; X86-X87-NEXT: movl %esi, 12(%ecx)
+; X86-X87-NEXT: movl %edi, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i128_f32:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunssfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorps %xmm0, %xmm0
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: movaps %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB9_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB9_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %ecx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: movl %eax, 12(%esi)
+; X86-SSE-NEXT: movl %ecx, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %edi, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i128_f32:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixunssfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm0, %xmm0
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm1 # 4-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptoui.sat.i128.f32(float %f)
+ ret i128 %x
+}
+
+;
+; 64-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f64 (double)
+declare i8 @llvm.fptoui.sat.i8.f64 (double)
+declare i13 @llvm.fptoui.sat.i13.f64 (double)
+declare i16 @llvm.fptoui.sat.i16.f64 (double)
+declare i19 @llvm.fptoui.sat.i19.f64 (double)
+declare i32 @llvm.fptoui.sat.i32.f64 (double)
+declare i50 @llvm.fptoui.sat.i50.f64 (double)
+declare i64 @llvm.fptoui.sat.i64.f64 (double)
+declare i100 @llvm.fptoui.sat.i100.f64(double)
+declare i128 @llvm.fptoui.sat.i128.f64(double)
+
+define i1 @test_unsigned_i1_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i1_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB10_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB10_3
+; X86-X87-NEXT: .LBB10_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB10_3:
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $1, %al
+; X86-X87-NEXT: ja .LBB10_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB10_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i1_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $1, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i1_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptoui.sat.i1.f64(double %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i8_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB11_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB11_3
+; X86-X87-NEXT: .LBB11_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB11_3:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %al
+; X86-X87-NEXT: ja .LBB11_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB11_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i8_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i8_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptoui.sat.i8.f64(double %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i13_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB12_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB12_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-X87-NEXT: ja .LBB12_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB12_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i13_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i13_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptoui.sat.i13.f64(double %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i16_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB13_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB13_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-X87-NEXT: ja .LBB13_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB13_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i16_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: cvttsd2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i16_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptoui.sat.i16.f64(double %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i19_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB14_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB14_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-X87-NEXT: ja .LBB14_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB14_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i19_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X86-SSE-NEXT: movapd %xmm0, %xmm2
+; X86-SSE-NEXT: subsd %xmm1, %xmm2
+; X86-SSE-NEXT: cvttsd2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttsd2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm1
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i19_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptoui.sat.i19.f64(double %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i32_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB15_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB15_2:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB15_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB15_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i32_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X86-SSE-NEXT: movapd %xmm0, %xmm2
+; X86-SSE-NEXT: subsd %xmm1, %xmm2
+; X86-SSE-NEXT: cvttsd2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttsd2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm1
+; X86-SSE-NEXT: cmovbel %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i32_f64:
+; X64: # %bb.0:
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptoui.sat.i32.f64(double %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i50_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $16, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB16_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB16_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: jb .LBB16_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB16_4:
+; X86-X87-NEXT: jb .LBB16_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB16_6:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB16_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %eax
+; X86-X87-NEXT: .LBB16_8:
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB16_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB16_10:
+; X86-X87-NEXT: addl $16, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i50_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd {{.*#+}} xmm2 = mem[0],zero
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm2
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: jbe .LBB16_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorpd %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB16_2:
+; X86-SSE-NEXT: movapd %xmm0, %xmm3
+; X86-SSE-NEXT: subsd %xmm2, %xmm3
+; X86-SSE-NEXT: movsd %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setbe %cl
+; X86-SSE-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %esi
+; X86-SSE-NEXT: jb .LBB16_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %eax
+; X86-SSE-NEXT: shll $31, %eax
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: .LBB16_4:
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i50_f64:
+; X64: # %bb.0:
+; X64-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X64-NEXT: movapd %xmm0, %xmm2
+; X64-NEXT: subsd %xmm1, %xmm2
+; X64-NEXT: cvttsd2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $1125899906842623, %rax # imm = 0x3FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptoui.sat.i50.f64(double %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i64_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB17_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB17_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %esi, %esi
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB17_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edi
+; X86-X87-NEXT: .LBB17_4:
+; X86-X87-NEXT: jb .LBB17_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB17_6:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB17_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: .LBB17_8:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i64_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $20, %esp
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd {{.*#+}} xmm2 = mem[0],zero
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm2
+; X86-SSE-NEXT: xorpd %xmm1, %xmm1
+; X86-SSE-NEXT: jbe .LBB17_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorpd %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB17_2:
+; X86-SSE-NEXT: movapd %xmm0, %xmm3
+; X86-SSE-NEXT: subsd %xmm2, %xmm3
+; X86-SSE-NEXT: movsd %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setbe %cl
+; X86-SSE-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: ucomisd %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %eax
+; X86-SSE-NEXT: jb .LBB17_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %edx
+; X86-SSE-NEXT: shll $31, %edx
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB17_4:
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ecx
+; X86-SSE-NEXT: cmoval %ecx, %edx
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: addl $20, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i64_f64:
+; X64: # %bb.0:
+; X64-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X64-NEXT: movapd %xmm0, %xmm2
+; X64-NEXT: subsd %xmm1, %xmm2
+; X64-NEXT: cvttsd2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttsd2si %xmm0, %rax
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm1, %xmm1
+; X64-NEXT: ucomisd %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm0
+; X64-NEXT: movq $-1, %rax
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptoui.sat.i64.f64(double %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i100_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fstl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fstl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunsdfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edi, %edi
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB18_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB18_2:
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB18_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB18_4:
+; X86-X87-NEXT: jb .LBB18_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB18_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fldl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $15, %eax
+; X86-X87-NEXT: ja .LBB18_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: .LBB18_8:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB18_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl %esi, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-X87-NEXT: .LBB18_10:
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %edi, (%ecx)
+; X86-X87-NEXT: andl $15, %eax
+; X86-X87-NEXT: movb %al, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i100_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunsdfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorpd %xmm0, %xmm0
+; X86-SSE-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm1
+; X86-SSE-NEXT: movapd %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB18_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB18_2:
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $15, %ebx
+; X86-SSE-NEXT: cmovbel %edi, %ebx
+; X86-SSE-NEXT: movl $-1, %edi
+; X86-SSE-NEXT: cmoval %edi, %edx
+; X86-SSE-NEXT: cmoval %edi, %ecx
+; X86-SSE-NEXT: cmoval %edi, %eax
+; X86-SSE-NEXT: movl %eax, 8(%esi)
+; X86-SSE-NEXT: movl %ecx, 4(%esi)
+; X86-SSE-NEXT: movl %edx, (%esi)
+; X86-SSE-NEXT: andl $15, %ebx
+; X86-SSE-NEXT: movb %bl, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i100_f64:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movsd %xmm0, (%rsp) # 8-byte Spill
+; X64-NEXT:    callq __fixunsdfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm0, %xmm0
+; X64-NEXT: movsd (%rsp), %xmm1 # 8-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero
+; X64-NEXT: ucomisd %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: movabsq $68719476735, %rcx # imm = 0xFFFFFFFFF
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptoui.sat.i100.f64(double %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f64(double %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i128_f64:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fstl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fstl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunsdfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB19_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB19_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: jb .LBB19_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB19_4:
+; X86-X87-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB19_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB19_6:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fldl {{[-0-9]+}}(%e{{[sb]}}p) # 8-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB19_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ebx, %eax
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB19_8:
+; X86-X87-NEXT: movl %esi, 12(%ecx)
+; X86-X87-NEXT: movl %edi, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i128_f64:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
+; X86-SSE-NEXT: movsd %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunsdfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorpd %xmm0, %xmm0
+; X86-SSE-NEXT: movsd {{.*#+}} xmm1 = mem[0],zero
+; X86-SSE-NEXT: ucomisd %xmm0, %xmm1
+; X86-SSE-NEXT: movapd %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB19_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB19_2:
+; X86-SSE-NEXT: ucomisd {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %ecx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: movl %eax, 12(%esi)
+; X86-SSE-NEXT: movl %ecx, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %edi, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i128_f64:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movsd %xmm0, (%rsp) # 8-byte Spill
+; X64-NEXT:    callq __fixunsdfti@PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorpd %xmm0, %xmm0
+; X64-NEXT: movsd (%rsp), %xmm1 # 8-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero
+; X64-NEXT: ucomisd %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomisd {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptoui.sat.i128.f64(double %f)
+ ret i128 %x
+}
+
+;
+; 16-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f16 (half)
+declare i8 @llvm.fptoui.sat.i8.f16 (half)
+declare i13 @llvm.fptoui.sat.i13.f16 (half)
+declare i16 @llvm.fptoui.sat.i16.f16 (half)
+declare i19 @llvm.fptoui.sat.i19.f16 (half)
+declare i32 @llvm.fptoui.sat.i32.f16 (half)
+declare i50 @llvm.fptoui.sat.i50.f16 (half)
+declare i64 @llvm.fptoui.sat.i64.f16 (half)
+declare i100 @llvm.fptoui.sat.i100.f16(half)
+declare i128 @llvm.fptoui.sat.i128.f16(half)
+
+define i1 @test_unsigned_i1_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i1_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB20_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB20_3
+; X86-X87-NEXT: .LBB20_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB20_3:
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $1, %al
+; X86-X87-NEXT: ja .LBB20_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB20_5:
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i1_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $1, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i1_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptoui.sat.i1.f16(half %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i8_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB21_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB21_3
+; X86-X87-NEXT: .LBB21_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB21_3:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %al
+; X86-X87-NEXT: ja .LBB21_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB21_5:
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i8_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i8_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptoui.sat.i8.f16(half %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i13_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB22_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB22_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-X87-NEXT: ja .LBB22_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB22_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i13_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i13_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptoui.sat.i13.f16(half %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i16_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $12, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB23_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB23_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-X87-NEXT: ja .LBB23_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB23_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $12, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i16_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: cvttss2si %xmm0, %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i16_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptoui.sat.i16.f16(half %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i19_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $28, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB24_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB24_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-X87-NEXT: ja .LBB24_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB24_4:
+; X86-X87-NEXT: addl $28, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i19_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movaps %xmm0, %xmm2
+; X86-SSE-NEXT: subss %xmm1, %xmm2
+; X86-SSE-NEXT: cvttss2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i19_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptoui.sat.i19.f16(half %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i32_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $28, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB25_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB25_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB25_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB25_4:
+; X86-X87-NEXT: addl $28, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i32_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $12, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movaps %xmm0, %xmm2
+; X86-SSE-NEXT: subss %xmm1, %xmm2
+; X86-SSE-NEXT: cvttss2si %xmm2, %eax
+; X86-SSE-NEXT: xorl $-2147483648, %eax # imm = 0x80000000
+; X86-SSE-NEXT: cvttss2si %xmm0, %ecx
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: cmovael %ecx, %edx
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: addl $12, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i32_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT:    callq __gnu_h2f_ieee@PLT
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movl $-1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptoui.sat.i32.f16(half %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i50_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $24, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setae %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jae .LBB26_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(2)
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fxch %st(2)
+; X86-X87-NEXT: .LBB26_2:
+; X86-X87-NEXT: fxch %st(2)
+; X86-X87-NEXT: fsubr %st(1), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: jb .LBB26_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB26_4:
+; X86-X87-NEXT: jb .LBB26_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB26_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB26_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %eax
+; X86-X87-NEXT: .LBB26_8:
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB26_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB26_10:
+; X86-X87-NEXT: addl $24, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i50_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $24, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm2, %xmm0
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: jae .LBB26_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorps %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB26_2:
+; X86-SSE-NEXT: movaps %xmm0, %xmm3
+; X86-SSE-NEXT: subss %xmm2, %xmm3
+; X86-SSE-NEXT: movss %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setae %cl
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %esi
+; X86-SSE-NEXT: jb .LBB26_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %eax
+; X86-SSE-NEXT: shll $31, %eax
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: .LBB26_4:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %eax, %edx
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %esi, %eax
+; X86-SSE-NEXT: addl $24, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i50_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee at PLT
+; X64-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: movaps %xmm0, %xmm2
+; X64-NEXT: subss %xmm1, %xmm2
+; X64-NEXT: cvttss2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movabsq $1125899906842623, %rax # imm = 0x3FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptoui.sat.i50.f16(half %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i64_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setae %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jae .LBB27_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(2)
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fxch %st(2)
+; X86-X87-NEXT: .LBB27_2:
+; X86-X87-NEXT: fxch %st(2)
+; X86-X87-NEXT: fsubr %st(1), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %esi, %esi
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB27_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edi
+; X86-X87-NEXT: .LBB27_4:
+; X86-X87-NEXT: jb .LBB27_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB27_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB27_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: .LBB27_8:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i64_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $28, %esp
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss {{.*#+}} xmm2 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm2, %xmm0
+; X86-SSE-NEXT: xorps %xmm1, %xmm1
+; X86-SSE-NEXT: jae .LBB27_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: xorps %xmm2, %xmm2
+; X86-SSE-NEXT: .LBB27_2:
+; X86-SSE-NEXT: movaps %xmm0, %xmm3
+; X86-SSE-NEXT: subss %xmm2, %xmm3
+; X86-SSE-NEXT: movss %xmm3, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: setae %cl
+; X86-SSE-NEXT: flds {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: ucomiss %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %eax
+; X86-SSE-NEXT: jb .LBB27_4
+; X86-SSE-NEXT: # %bb.3:
+; X86-SSE-NEXT: movzbl %cl, %edx
+; X86-SSE-NEXT: shll $31, %edx
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: .LBB27_4:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ecx
+; X86-SSE-NEXT: cmoval %ecx, %edx
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: addl $28, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i64_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee at PLT
+; X64-NEXT: movss {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: movaps %xmm0, %xmm2
+; X64-NEXT: subss %xmm1, %xmm2
+; X64-NEXT: cvttss2si %xmm2, %rax
+; X64-NEXT: movabsq $-9223372036854775808, %rcx # imm = 0x8000000000000000
+; X64-NEXT: xorq %rax, %rcx
+; X64-NEXT: cvttss2si %xmm0, %rax
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rcx, %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm1, %xmm1
+; X64-NEXT: ucomiss %xmm1, %xmm0
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm0
+; X64-NEXT: movq $-1, %rax
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptoui.sat.i64.f16(half %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i100_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunssfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edi, %edi
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB28_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB28_2:
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB28_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB28_4:
+; X86-X87-NEXT: jb .LBB28_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB28_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $15, %eax
+; X86-X87-NEXT: ja .LBB28_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: .LBB28_8:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB28_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl %esi, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-X87-NEXT: .LBB28_10:
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %edi, (%ecx)
+; X86-X87-NEXT: andl $15, %eax
+; X86-X87-NEXT: movb %al, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i100_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: calll __fixunssfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorps %xmm0, %xmm0
+; X86-SSE-NEXT: movss {{[-0-9]+}}(%e{{[sb]}}p), %xmm1 # 4-byte Reload
+; X86-SSE-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: movaps %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB28_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB28_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $15, %ebx
+; X86-SSE-NEXT: cmovbel %edi, %ebx
+; X86-SSE-NEXT: movl $-1, %edi
+; X86-SSE-NEXT: cmoval %edi, %edx
+; X86-SSE-NEXT: cmoval %edi, %ecx
+; X86-SSE-NEXT: cmoval %edi, %eax
+; X86-SSE-NEXT: movl %eax, 8(%esi)
+; X86-SSE-NEXT: movl %ecx, 4(%esi)
+; X86-SSE-NEXT: movl %edx, (%esi)
+; X86-SSE-NEXT: andl $15, %ebx
+; X86-SSE-NEXT: movb %bl, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i100_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee at PLT
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixunssfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm0, %xmm0
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm1 # 4-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: movabsq $68719476735, %rcx # imm = 0xFFFFFFFFF
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptoui.sat.i100.f16(half %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f16(half %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i128_f16:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $44, %esp
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: calll __gnu_h2f_ieee
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fsts {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsts {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Spill
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunssfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB29_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB29_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: jb .LBB29_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB29_4:
+; X86-X87-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB29_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB29_6:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: flds {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB29_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ebx, %eax
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB29_8:
+; X86-X87-NEXT: movl %esi, 12(%ecx)
+; X86-X87-NEXT: movl %edi, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $44, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i128_f16:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $32, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __gnu_h2f_ieee
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: fstps {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movss {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: movss %xmm0, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-SSE-NEXT: movss %xmm0, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: calll __fixunssfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: xorps %xmm0, %xmm0
+; X86-SSE-NEXT: movss {{[-0-9]+}}(%e{{[sb]}}p), %xmm1 # 4-byte Reload
+; X86-SSE-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X86-SSE-NEXT: ucomiss %xmm0, %xmm1
+; X86-SSE-NEXT: movaps %xmm1, %xmm0
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB29_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB29_2:
+; X86-SSE-NEXT: ucomiss {{\.LCPI.*}}, %xmm0
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %ecx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: movl %eax, 12(%esi)
+; X86-SSE-NEXT: movl %ecx, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %edi, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $32, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i128_f16:
+; X64: # %bb.0:
+; X64-NEXT: pushq %rax
+; X64-NEXT: movzwl %di, %edi
+; X64-NEXT: callq __gnu_h2f_ieee at PLT
+; X64-NEXT: movss %xmm0, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; X64-NEXT: callq __fixunssfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: xorps %xmm0, %xmm0
+; X64-NEXT: movss {{[-0-9]+}}(%r{{[sb]}}p), %xmm1 # 4-byte Reload
+; X64-NEXT: # xmm1 = mem[0],zero,zero,zero
+; X64-NEXT: ucomiss %xmm0, %xmm1
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: ucomiss {{.*}}(%rip), %xmm1
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: popq %rcx
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptoui.sat.i128.f16(half %f)
+ ret i128 %x
+}
+
+;
+; 80-bit float to unsigned integer
+;
+
+declare i1 @llvm.fptoui.sat.i1.f80 (x86_fp80)
+declare i8 @llvm.fptoui.sat.i8.f80 (x86_fp80)
+declare i13 @llvm.fptoui.sat.i13.f80 (x86_fp80)
+declare i16 @llvm.fptoui.sat.i16.f80 (x86_fp80)
+declare i19 @llvm.fptoui.sat.i19.f80 (x86_fp80)
+declare i32 @llvm.fptoui.sat.i32.f80 (x86_fp80)
+declare i50 @llvm.fptoui.sat.i50.f80 (x86_fp80)
+declare i64 @llvm.fptoui.sat.i64.f80 (x86_fp80)
+declare i100 @llvm.fptoui.sat.i100.f80(x86_fp80)
+declare i128 @llvm.fptoui.sat.i128.f80(x86_fp80)
+
+define i1 @test_unsigned_i1_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i1_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB30_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB30_3
+; X86-X87-NEXT: .LBB30_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB30_3:
+; X86-X87-NEXT: fld1
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $1, %al
+; X86-X87-NEXT: ja .LBB30_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB30_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i1_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzbl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: fld1
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $1, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i1_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzbl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: fld1
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i1 @llvm.fptoui.sat.i1.f80(x86_fp80 %f)
+ ret i1 %x
+}
+
+define i8 @test_unsigned_i8_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i8_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fists {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB31_1
+; X86-X87-NEXT: # %bb.2:
+; X86-X87-NEXT: movb {{[0-9]+}}(%esp), %cl
+; X86-X87-NEXT: jmp .LBB31_3
+; X86-X87-NEXT: .LBB31_1:
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: .LBB31_3:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movb $-1, %al
+; X86-X87-NEXT: ja .LBB31_5
+; X86-X87-NEXT: # %bb.4:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB31_5:
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i8_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fists {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzbl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: cmovael %eax, %ecx
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $255, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $al killed $al killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i8_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fists -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzbl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovael %eax, %ecx
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $255, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $al killed $al killed $eax
+; X64-NEXT: retq
+ %x = call i8 @llvm.fptoui.sat.i8.f80(x86_fp80 %f)
+ ret i8 %x
+}
+
+define i13 @test_unsigned_i13_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i13_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB32_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB32_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-X87-NEXT: ja .LBB32_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB32_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i13_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw (%esp)
+; X86-SSE-NEXT: movzwl (%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw (%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: jb .LBB32_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: .LBB32_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i13_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistl -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB32_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: .LBB32_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $8191, %eax # imm = 0x1FFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i13 @llvm.fptoui.sat.i13.f80(x86_fp80 %f)
+ ret i13 %x
+}
+
+define i16 @test_unsigned_i16_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i16_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $8, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw (%esp)
+; X86-X87-NEXT: movzwl (%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB33_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB33_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-X87-NEXT: ja .LBB33_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB33_4:
+; X86-X87-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-X87-NEXT: addl $8, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i16_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $8, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw (%esp)
+; X86-SSE-NEXT: movzwl (%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistl {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw (%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: jb .LBB33_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: .LBB33_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: # kill: def $ax killed $ax killed $eax
+; X86-SSE-NEXT: addl $8, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i16_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistl -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB33_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: .LBB33_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $65535, %eax # imm = 0xFFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: # kill: def $ax killed $ax killed $eax
+; X64-NEXT: retq
+ %x = call i16 @llvm.fptoui.sat.i16.f80(x86_fp80 %f)
+ ret i16 %x
+}
+
+define i19 @test_unsigned_i19_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i19_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB34_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB34_2:
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-X87-NEXT: ja .LBB34_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB34_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i19_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $20, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: jb .LBB34_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: .LBB34_2:
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: addl $20, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i19_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB34_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: .LBB34_2:
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $524287, %eax # imm = 0x7FFFF
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i19 @llvm.fptoui.sat.i19.f80(x86_fp80 %f)
+ ret i19 %x
+}
+
+define i32 @test_unsigned_i32_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i32_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-X87-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: jb .LBB35_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB35_2:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB35_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: .LBB35_4:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i32_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: subl $20, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: jb .LBB35_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: .LBB35_2:
+; X86-SSE-NEXT: fldl {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %ecx, %eax
+; X86-SSE-NEXT: addl $20, %esp
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i32_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %eax
+; X64-NEXT: orl $3072, %eax # imm = 0xC00
+; X64-NEXT: movw %ax, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: jb .LBB35_2
+; X64-NEXT: # %bb.1:
+; X64-NEXT: movl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: .LBB35_2:
+; X64-NEXT: fldl {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movl $-1, %eax
+; X64-NEXT: cmovbel %ecx, %eax
+; X64-NEXT: retq
+ %x = call i32 @llvm.fptoui.sat.i32.f80(x86_fp80 %f)
+ ret i32 %x
+}
+
+define i50 @test_unsigned_i50_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i50_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $16, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB36_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB36_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: jb .LBB36_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %esi
+; X86-X87-NEXT: .LBB36_4:
+; X86-X87-NEXT: jb .LBB36_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB36_6:
+; X86-X87-NEXT: fldl {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: ja .LBB36_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edx, %eax
+; X86-X87-NEXT: .LBB36_8:
+; X86-X87-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-X87-NEXT: ja .LBB36_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %esi, %edx
+; X86-X87-NEXT: .LBB36_10:
+; X86-X87-NEXT: addl $16, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i50_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: setbe %cl
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fcmovbe %st(2), %st
+; X86-SSE-NEXT: fstp %st(2)
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fsubr %st(2), %st
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-SSE-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %esi, %esi
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: jb .LBB36_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movb %cl, %al
+; X86-SSE-NEXT: shll $31, %eax
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl %eax, %esi
+; X86-SSE-NEXT: .LBB36_2:
+; X86-SSE-NEXT: fldl {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $-1, %eax
+; X86-SSE-NEXT: cmovbel %edx, %eax
+; X86-SSE-NEXT: movl $262143, %edx # imm = 0x3FFFF
+; X86-SSE-NEXT: cmovbel %esi, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i50_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: setbe %al
+; X64-NEXT: fldz
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fcmovbe %st(2), %st
+; X64-NEXT: fstp %st(2)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fsubr %st(2), %st
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: orl $3072, %ecx # imm = 0xC00
+; X64-NEXT: movw %cx, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: shlq $63, %rax
+; X64-NEXT: xorq -{{[0-9]+}}(%rsp), %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: fldl {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movabsq $1125899906842623, %rax # imm = 0x3FFFFFFFFFFFF
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i50 @llvm.fptoui.sat.i50.f80(x86_fp80 %f)
+ ret i50 %x
+}
+
+define i64 @test_unsigned_i64_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i64_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $20, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: flds {{\.LCPI.*}}
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %ecx, %ecx
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: setbe %al
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: jbe .LBB37_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: .LBB37_2:
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fsubr %st(2), %st
+; X86-X87-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movzwl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: orl $3072, %edx # imm = 0xC00
+; X86-X87-NEXT: movw %dx, {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-X87-NEXT: movb %al, %cl
+; X86-X87-NEXT: shll $31, %ecx
+; X86-X87-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucom %st(1)
+; X86-X87-NEXT: fstp %st(1)
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: xorl %esi, %esi
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %edi
+; X86-X87-NEXT: jb .LBB37_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl %ecx, %edi
+; X86-X87-NEXT: .LBB37_4:
+; X86-X87-NEXT: jb .LBB37_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB37_6:
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB37_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %esi, %eax
+; X86-X87-NEXT: movl %edi, %edx
+; X86-X87-NEXT: .LBB37_8:
+; X86-X87-NEXT: addl $20, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: retl
+;
+; X86-SSE-LABEL: test_unsigned_i64_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: subl $16, %esp
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: flds {{\.LCPI.*}}
+; X86-SSE-NEXT: xorl %ecx, %ecx
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: setbe %bl
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fcmovbe %st(2), %st
+; X86-SSE-NEXT: fstp %st(2)
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fsubr %st(2), %st
+; X86-SSE-NEXT: fnstcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: movzwl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: orl $3072, %eax # imm = 0xC00
+; X86-SSE-NEXT: movw %ax, {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fistpll {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fldcw {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: xorl %edx, %edx
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $0, %eax
+; X86-SSE-NEXT: jb .LBB37_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movb %bl, %cl
+; X86-SSE-NEXT: shll $31, %ecx
+; X86-SSE-NEXT: xorl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %ecx, %edx
+; X86-SSE-NEXT: .LBB37_2:
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $-1, %ecx
+; X86-SSE-NEXT: cmoval %ecx, %eax
+; X86-SSE-NEXT: cmoval %ecx, %edx
+; X86-SSE-NEXT: addl $16, %esp
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl
+;
+; X64-LABEL: test_unsigned_i64_f80:
+; X64: # %bb.0:
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: flds {{.*}}(%rip)
+; X64-NEXT: xorl %eax, %eax
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: setbe %al
+; X64-NEXT: fldz
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fcmovbe %st(2), %st
+; X64-NEXT: fstp %st(2)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fsubr %st(2), %st
+; X64-NEXT: fnstcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: movzwl -{{[0-9]+}}(%rsp), %ecx
+; X64-NEXT: orl $3072, %ecx # imm = 0xC00
+; X64-NEXT: movw %cx, -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: fistpll -{{[0-9]+}}(%rsp)
+; X64-NEXT: fldcw -{{[0-9]+}}(%rsp)
+; X64-NEXT: shlq $63, %rax
+; X64-NEXT: xorq -{{[0-9]+}}(%rsp), %rax
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovaeq %rax, %rcx
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movq $-1, %rax
+; X64-NEXT: cmovbeq %rcx, %rax
+; X64-NEXT: retq
+ %x = call i64 @llvm.fptoui.sat.i64.f80(x86_fp80 %f)
+ ret i64 %x
+}
+
+define i100 @test_unsigned_i100_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i100_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunsxfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edi, %edi
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB38_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB38_2:
+; X86-X87-NEXT: movl $0, %esi
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB38_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-X87-NEXT: .LBB38_4:
+; X86-X87-NEXT: jb .LBB38_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-X87-NEXT: .LBB38_6:
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $15, %eax
+; X86-X87-NEXT: ja .LBB38_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %edi, %eax
+; X86-X87-NEXT: .LBB38_8:
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edx
+; X86-X87-NEXT: ja .LBB38_10
+; X86-X87-NEXT: # %bb.9:
+; X86-X87-NEXT: movl %ebx, %edi
+; X86-X87-NEXT: movl %esi, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edx # 4-byte Reload
+; X86-X87-NEXT: .LBB38_10:
+; X86-X87-NEXT: movl %edx, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %edi, (%ecx)
+; X86-X87-NEXT: andl $15, %eax
+; X86-X87-NEXT: movb %al, 12(%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i100_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $48, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-SSE-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunsxfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB38_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB38_2:
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $15, %ebx
+; X86-SSE-NEXT: cmovbel %edi, %ebx
+; X86-SSE-NEXT: movl $-1, %edi
+; X86-SSE-NEXT: cmoval %edi, %edx
+; X86-SSE-NEXT: cmoval %edi, %ecx
+; X86-SSE-NEXT: cmoval %edi, %eax
+; X86-SSE-NEXT: movl %eax, 8(%esi)
+; X86-SSE-NEXT: movl %ecx, 4(%esi)
+; X86-SSE-NEXT: movl %edx, (%esi)
+; X86-SSE-NEXT: andl $15, %ebx
+; X86-SSE-NEXT: movb %bl, 12(%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $48, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i100_f80:
+; X64: # %bb.0:
+; X64-NEXT: subq $40, %rsp
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fstpt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Spill
+; X64-NEXT: fstpt (%rsp)
+; X64-NEXT: callq __fixunsxfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fldt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Reload
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: movabsq $68719476735, %rcx # imm = 0xFFFFFFFFF
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: addq $40, %rsp
+; X64-NEXT: retq
+ %x = call i100 @llvm.fptoui.sat.i100.f80(x86_fp80 %f)
+ ret i100 %x
+}
+
+define i128 @test_unsigned_i128_f80(x86_fp80 %f) nounwind {
+; X86-X87-LABEL: test_unsigned_i128_f80:
+; X86-X87: # %bb.0:
+; X86-X87-NEXT: pushl %ebp
+; X86-X87-NEXT: pushl %ebx
+; X86-X87-NEXT: pushl %edi
+; X86-X87-NEXT: pushl %esi
+; X86-X87-NEXT: subl $60, %esp
+; X86-X87-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: fld %st(0)
+; X86-X87-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-X87-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: movl %eax, (%esp)
+; X86-X87-NEXT: fldz
+; X86-X87-NEXT: fld %st(1)
+; X86-X87-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-X87-NEXT: fxch %st(1)
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: movl %eax, %ebx
+; X86-X87-NEXT: calll __fixunsxfti
+; X86-X87-NEXT: subl $4, %esp
+; X86-X87-NEXT: xorl %edx, %edx
+; X86-X87-NEXT: movb %bh, %ah
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $0, %eax
+; X86-X87-NEXT: jb .LBB39_2
+; X86-X87-NEXT: # %bb.1:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-X87-NEXT: .LBB39_2:
+; X86-X87-NEXT: movl $0, %ecx
+; X86-X87-NEXT: jb .LBB39_4
+; X86-X87-NEXT: # %bb.3:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: .LBB39_4:
+; X86-X87-NEXT: movl %ecx, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl %eax, {{[-0-9]+}}(%e{{[sb]}}p) # 4-byte Spill
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-X87-NEXT: movl $0, %ebx
+; X86-X87-NEXT: jb .LBB39_6
+; X86-X87-NEXT: # %bb.5:
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %ebx
+; X86-X87-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-X87-NEXT: .LBB39_6:
+; X86-X87-NEXT: fldt {{\.LCPI.*}}
+; X86-X87-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-X87-NEXT: fucompp
+; X86-X87-NEXT: fnstsw %ax
+; X86-X87-NEXT: # kill: def $ah killed $ah killed $ax
+; X86-X87-NEXT: sahf
+; X86-X87-NEXT: movl $-1, %eax
+; X86-X87-NEXT: movl $-1, %ebp
+; X86-X87-NEXT: movl $-1, %edi
+; X86-X87-NEXT: movl $-1, %esi
+; X86-X87-NEXT: ja .LBB39_8
+; X86-X87-NEXT: # %bb.7:
+; X86-X87-NEXT: movl %ebx, %eax
+; X86-X87-NEXT: movl %edx, %ebp
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %edi # 4-byte Reload
+; X86-X87-NEXT: movl {{[-0-9]+}}(%e{{[sb]}}p), %esi # 4-byte Reload
+; X86-X87-NEXT: .LBB39_8:
+; X86-X87-NEXT: movl %esi, 12(%ecx)
+; X86-X87-NEXT: movl %edi, 8(%ecx)
+; X86-X87-NEXT: movl %ebp, 4(%ecx)
+; X86-X87-NEXT: movl %eax, (%ecx)
+; X86-X87-NEXT: movl %ecx, %eax
+; X86-X87-NEXT: addl $60, %esp
+; X86-X87-NEXT: popl %esi
+; X86-X87-NEXT: popl %edi
+; X86-X87-NEXT: popl %ebx
+; X86-X87-NEXT: popl %ebp
+; X86-X87-NEXT: retl $4
+;
+; X86-SSE-LABEL: test_unsigned_i128_f80:
+; X86-SSE: # %bb.0:
+; X86-SSE-NEXT: pushl %ebx
+; X86-SSE-NEXT: pushl %edi
+; X86-SSE-NEXT: pushl %esi
+; X86-SSE-NEXT: subl $48, %esp
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %esi
+; X86-SSE-NEXT: fldt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: fld %st(0)
+; X86-SSE-NEXT: fstpt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Spill
+; X86-SSE-NEXT: fstpt {{[0-9]+}}(%esp)
+; X86-SSE-NEXT: leal {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl %eax, (%esp)
+; X86-SSE-NEXT: calll __fixunsxfti
+; X86-SSE-NEXT: subl $4, %esp
+; X86-SSE-NEXT: fldt {{[-0-9]+}}(%e{{[sb]}}p) # 10-byte Folded Reload
+; X86-SSE-NEXT: xorl %eax, %eax
+; X86-SSE-NEXT: fldz
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucomi %st(1), %st
+; X86-SSE-NEXT: fstp %st(1)
+; X86-SSE-NEXT: movl $0, %ecx
+; X86-SSE-NEXT: movl $0, %edx
+; X86-SSE-NEXT: movl $0, %edi
+; X86-SSE-NEXT: jb .LBB39_2
+; X86-SSE-NEXT: # %bb.1:
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %ecx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edx
+; X86-SSE-NEXT: movl {{[0-9]+}}(%esp), %edi
+; X86-SSE-NEXT: .LBB39_2:
+; X86-SSE-NEXT: fldt {{\.LCPI.*}}
+; X86-SSE-NEXT: fxch %st(1)
+; X86-SSE-NEXT: fucompi %st(1), %st
+; X86-SSE-NEXT: fstp %st(0)
+; X86-SSE-NEXT: movl $-1, %ebx
+; X86-SSE-NEXT: cmoval %ebx, %edi
+; X86-SSE-NEXT: cmoval %ebx, %edx
+; X86-SSE-NEXT: cmoval %ebx, %ecx
+; X86-SSE-NEXT: cmoval %ebx, %eax
+; X86-SSE-NEXT: movl %eax, 12(%esi)
+; X86-SSE-NEXT: movl %ecx, 8(%esi)
+; X86-SSE-NEXT: movl %edx, 4(%esi)
+; X86-SSE-NEXT: movl %edi, (%esi)
+; X86-SSE-NEXT: movl %esi, %eax
+; X86-SSE-NEXT: addl $48, %esp
+; X86-SSE-NEXT: popl %esi
+; X86-SSE-NEXT: popl %edi
+; X86-SSE-NEXT: popl %ebx
+; X86-SSE-NEXT: retl $4
+;
+; X64-LABEL: test_unsigned_i128_f80:
+; X64: # %bb.0:
+; X64-NEXT: subq $40, %rsp
+; X64-NEXT: fldt {{[0-9]+}}(%rsp)
+; X64-NEXT: fld %st(0)
+; X64-NEXT: fstpt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Spill
+; X64-NEXT: fstpt (%rsp)
+; X64-NEXT: callq __fixunsxfti at PLT
+; X64-NEXT: xorl %ecx, %ecx
+; X64-NEXT: fldz
+; X64-NEXT: fldt {{[-0-9]+}}(%r{{[sb]}}p) # 10-byte Folded Reload
+; X64-NEXT: fucomi %st(1), %st
+; X64-NEXT: fstp %st(1)
+; X64-NEXT: cmovbq %rcx, %rdx
+; X64-NEXT: cmovbq %rcx, %rax
+; X64-NEXT: fldt {{.*}}(%rip)
+; X64-NEXT: fxch %st(1)
+; X64-NEXT: fucompi %st(1), %st
+; X64-NEXT: fstp %st(0)
+; X64-NEXT: movq $-1, %rcx
+; X64-NEXT: cmovaq %rcx, %rax
+; X64-NEXT: cmovaq %rcx, %rdx
+; X64-NEXT: addq $40, %rsp
+; X64-NEXT: retq
+ %x = call i128 @llvm.fptoui.sat.i128.f80(x86_fp80 %f)
+ ret i128 %x
+}